Hitters still dig the long ball

I don’t want to make too much of this, but here it is:

  • In 1998, an expansion year when both Mark McGwire and Sammy Sosa surpassed the single-season home run record with 70 and 66, respectively, the home run rate across the major leagues was 2.7% of all plate appearances.
  • In 2012, after 10 years of random P.E.D. testing, Miguel Cabrera led the majors with 44 HRs, and the home run rate was … still 2.7% of all plate appearances.


Outside of the PED era, only 1987 saw a higher HR% than last season. And while the 2010-11 HR% was a bit lower (2.5% each year), that figure is still higher than any year but 1961, 1987 and the PED era.

A couple of notes before we go on:

  • The HR peak of the PED era was not 1998 (2.7%), but 2000 (3.0%). 1998 was about average for the period.
  • HRs per game in 2012 were slightly below the 1998 level (1.02 vs. 1.04) — but only because there were slightly fewer PAs per game, due to declines in batting average (.266 to .255) and walks per game (3.38 to 3.03).

Comparing 1998 and 2012, HRs per hit went up from 11.4% to 11.7%, and HRs per batted ball went up from 3.7% to 3.8%.
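
For anyone who wants to reproduce these rates, the arithmetic is straightforward. Here’s a minimal sketch in Python; the league totals passed in are placeholders (the real numbers come from the league batting tables), and the batted-ball definition (AB - SO + SF) is my own assumption, not anything official:

    # Minimal sketch: league-wide HR rates from season totals.
    # The totals used below are placeholders -- substitute real league numbers.
    def hr_rates(pa, ab, h, hr, so, sf):
        """Return HR as a share of PA, of hits, and of batted balls."""
        batted_balls = ab - so + sf   # one common definition; sac bunts could be added too
        return {
            "HR per PA": hr / pa,
            "HR per hit": hr / h,
            "HR per batted ball": hr / batted_balls,
        }

    # Example call with made-up totals, just to show the shape of the output:
    print(hr_rates(pa=185_000, ab=165_000, h=42_000, hr=5_000, so=33_000, sf=1_300))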

Time for some graphs of home runs over the past 30 years:

[Chart: HR as % of PA]

[Chart: HR as % of Hits]


Sure, the leaders aren’t hitting great heights of late. Cabrera’s 44 was the highest of the last two years, whereas every full season from 1993-2007 had at least three hitters with 45+, and most of those years had at least one 50-HR slugger. The HRs aren’t flaunting themselves at the top of the heap; they’re nestled in the middle tiers. Check out the distributions from 1998-2012:

[Chart: Hitters with 35+ HRs]

[Chart: Hitters with 25-34 HRs]

[Chart: Hitters with 15-24 HRs]


I’m not making the case that juicing is still rampant, though some believe it is. But whatever else the hitters are doing, when you add up the results — including the ever-growing strikeout rate — it’s clear that the intent to hit HRs is broader than ever before.

Note the distribution by batting order position for 1998 and 2012. The percentage of total HRs that were hit by each spot in the middle of the order (#3-5) went down, while every other spot went up. It’s subtle, but it’s there:

[Chart: Pct of HRs hit by batting order position]


What do you think? What other factors are involved in this trend? And do you like what it’s done to the game?

50 thoughts on “Hitters still dig the long ball”

  1. 1
    Ed says:

    Great post and graphs, John! Dave Cameron of Fangraphs covered this phenomenon back in August with some similar but also some different analyses, if you want to read more.


    However, it appears that the early August home run boom that Dave noted didn’t last. I don’t have the data to calculate home runs per 9 innings, but on a per-game basis August was behind June and July and essentially the same as May.

    • 5
      John Autin says:

      Ed, thanks for that link … even if it does make me look like a plagiarist! 🙂

      By the way, I’m working on some other studies — one suggests that on-base percentage is more important than batting average, while another shows that pitchers’ W-L records don’t mean a lot. I still need to color in the graphs, though.

      (At least I steal from the best, eh?)

      • 6
        Ed says:

        John – You are definitely not a plagiarist! Personally I like it when two great minds tackle the same issue from slightly different perspectives. It adds depth and nuance to the discussion.

        • 7
          John Autin says:

          Ed, I was just playfully sulking. I actually thought you were exceedingly tactful *not* to raise the originality point, considering that Cameron covered most of my themes, even HRs/batted ball.

    • 44
      bstar says:

      Ed, my first thought when reading John’s piece was the one you linked to also. I’ve linked to similar articles that JA had just written in the past, so I’m glad I wasn’t the one to do it this time (not implying that you didn’t do it tactfully, Ed!).

      But the fact that JA independently reached the same conclusions as the well-respected Cameron shouldn’t surprise any of us.

  2. 2
    Jim Bouldin says:

    Now THAT is the way to ring in the new year!


    Indeed, I detest it, and if there were a bold font, I would use it. Baseball was farrrrrrr more interesting back in the ’70s and ’80s, when teams ran, there were definite offensive strategies, and at least a few managers (e.g. Dick Williams, Chuck Tanner, Whitey Herzog) had the guts to experiment with new ideas—and be successful with them. I really don’t watch much baseball anymore; it’s not really worth the time required for the enjoyment returned. Just not all that interesting anymore.

    • 30
      Brendan Bingham says:

      Based on the following (admittedly simple-minded) analysis, hitting home runs was a more successful “strategy” than base stealing during the ‘70s and ‘80s. For the 20-year period 1970 through 1989, 12 of the 40 pennant-winning teams led their league in home runs, while only 8 pennant winners led their league in stolen bases. The ’76 Reds were a special case, leading the NL in both HR and SB (and also doubles, remarkably).

  3. 3
    Mike L says:

    Nice work, John A. Shooting from the hip, I wonder if the comparative flattening of the curve doesn’t show that: a) even at the high school level, size matters, and it’s harder for smaller players and glove men to make their way through the ranks (could you have a lot of Rich Dauers today?); b) managers are becoming more creative about how they use their lineups, placing certain hitters in non-traditional power spots for their OBA; c) there’s more tolerance for the strikeout and less small ball, allowing players with modest power to swing harder; d) there’s a historical arc to types of talent; we have gone through cycles before where there were few great power hitters who set themselves apart. It’s one of the reasons the 500 HR club seemed so elite when I was growing up.
    Happy New Year to all, and glad to have a stimulating place to go to first on a cold morning.

  4. 4
    e pluribus munu says:

    John, together with your post on 1917-18 K’s, you’ve bridged the changing years with intriguing mystery issues. I spent a lot of year-end time on the earlier one without discovering anything worth contributing. I don’t suppose I can do much on this one on a foggy-headed New Year’s morning.

    But it does seem to me that we’ll need another year or two to see whether the 2012 HR figures are meaningful as a trend, or whether they are, as your initial two charts suggest, part of a discernible rhythm of trend/outlier years, as HR rates decline from their PED peak. I’m not sure I see clear trends in your middle three charts either – perhaps a longer time frame (1969-1997?) would allow a better sense of baseline.

    But my guess is that if we were to chart HRs from 1920 on, we’d find some sort of rhythm of HR “compression” and (hmmmm . . .) “rarefaction” (?), which raises really interesting questions about the dynamics of the game. I’m not convinced that a single year-on-year contrast like your final graph gives us enough to go on, but the issue it raises is really interesting.

    To answer your last question, and taking into account Ed’s comment and link, when it comes to how well high HR and K rates suit my own preferences in baseball, I much prefer lower rates, with fewer BB and more singles and SB attempts. But I also like to see some outlier performers with 50+ HRs and pitchers with 300 Ks. For me, that’s the optimal combination of on-field and statistical interest. (Has there ever been such a year? Maybe 1965 NL, sort of?)

  5. 8
    John Autin says:

    Something I noticed about 1968: While the .2367 batting average is the lowest in MLB history, that’s not necessarily the *main* drag on scoring.

    Compare the 2-year changes from 1966 to ’68:
    – BA, -5%
    – HR/Hit, -23%
    – XBH/Hit, -10%

    And almost all of the drop in BA came from batted balls, not strikeouts. The K rate from ’66 to ’68 went up just a hair, but the BA on contact fell 4%.

    A separate matter: We all know that the .296 BA recorded in 1930 is the highest in modern history. But note the BA on contact:
    – 1930, .326
    – 2012, .327
    The years 1993-2012 comprise 20 of the top 21 modern marks for BA on contact.
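
    For concreteness, “BA on contact” here is just hits divided by at-bats that didn’t end in a strikeout, and the two-year changes are plain percentage deltas. A minimal sketch in Python, with placeholder season totals rather than the actual 1966 and 1968 lines:

      # BA on contact: batting average with strikeouts removed from the denominator.
      def ba_on_contact(h, ab, so):
          return h / (ab - so)

      # Two-year percentage change, as used for the 1966 -> 1968 comparison above.
      def pct_change(old, new):
          return (new - old) / old * 100

      # Placeholder totals only -- substitute the real league lines.
      y66 = dict(h=14_000, ab=53_000, so=9_500, hr=1_500)
      y68 = dict(h=13_300, ab=53_500, so=9_900, hr=1_100)

      print("BA change: %.1f%%" % pct_change(y66["h"] / y66["ab"], y68["h"] / y68["ab"]))
      print("HR/Hit change: %.1f%%" % pct_change(y66["hr"] / y66["h"], y68["hr"] / y68["h"]))
      print("BA on contact: %.3f -> %.3f" % (ba_on_contact(y66["h"], y66["ab"], y66["so"]),
                                             ba_on_contact(y68["h"], y68["ab"], y68["so"])))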

    • 9
      Hartvig says:

      John- Your last sentence made me think of something.

      It’s indisputable that for the first 40 or so seasons of the game, fielding improved almost every year at a fairly marked rate, and somewhat evident that the trend continued at a slower pace into the ’20s and ’30s, and probably even to some small extent for the next 40 years after. I do think, however, that in the ’70s and ’80s equipment changes (larger gloves and better spikes, among others) may have led to another increase in the rate of improvement in fielding.

      If what I believe is true, I wonder if there’s evidence that fielders were getting to more balls in play during that time, and if the swinging-for-the-fences trend might at least in part be an adjustment to that fact.

    • 10
      no statistician but says:

      It seems to me, no statistician, that your last “separate matter” ought to put to rest the notion that there is no difference between a SO and a BIP out:

      1930: 5.55 R/G; 3.10 BB/G; .434 SLG; .790 OPS; 3.21 SO/G
      2012: 4.32 R/G; 3.03 BB/G; .405 SLG; .724 OPS; 7.50 SO/G

      With the BA on contact virtually the same, more than a run per game resulted from hitting the ball rather than whiffing, and with a scant difference in BB/G, slugging and on base percentage were far higher in 1930. My math is rusty, but it seems like, if you double the strikeouts, you lose about a half run per game, at least in comparing these two seasons.

      • 11
        Jim Bouldin says:

        The idea that Ks are no worse than in-play outs is one of the dumbest ideas sabermetricians have come up with, maybe the dumbest. Yeah, it’s better than a double play, wonderful. If your slow, lumbering sluggers had the ability to hit and run or steal a bag, that wouldn’t be an issue.

        • 14
          John Autin says:

          Jim, I find three weak points in your reasoning:

          1) I think you underestimate the cost of a GDP. Here’s the 2012 winning percentage by number of GDPs in the game:
          – 0 GDP, .518
          – 1 GDP, .499
          – 2 GDP, .445
          So even though more GDPs also correlate with more baserunners, which is a good thing, their cost still shows up large in a simple tally.

          2) We have to measure the game based on who’s actually playing, not on some hypothetical sluggers who can also hit-and-run. I’ll agree that *if* Miguel Cabrera hadn’t grounded into 28 double plays last year, it would be easier to argue that his 41 fewer strikeouts than Mike Trout (7 GDPs) helped him create more runs. But Miggy did hit into those DPs, and the cost was significant, and that’s part of why Trout created more runs.

          3) SO vs. DP is something of a false dichotomy. Edwin Encarnacion, no speedster, had virtually the same K rate last year as Cabrera, but had just 6 GDPs, because he hits everything in the air. Alberto Callaspo (4 SB, no triples) had a low K rate and 6 GDPs. Mark Reynolds whiffs a ton but still had 19 GDPs.

          A reduction in GDPs is just part of why the studies show little difference in the value of different kinds of outs. The bigger reason is that very few outs actually advance a baserunner.

          If you look at the teams with the highest and lowest numbers of “productive outs,” you’ll be hard pressed to find a useful trend.

          Last year’s extremes were the Bay Area teams, SF (214) and Oakland (131) — an edge of 83 productive outs for the Giants.

          The A’s (even with the DH) fanned 290 more times. SF, even with pitchers hitting, had a 31-point edge in BA and 17 points in OBP. Oakland did not have an edge in avoiding the DP; both teams tied for the lowest GDP rate (per opportunity).

          Yet their scoring rates were virtually the same, mainly because the A’s hit 93 more HRs.

        • 19
          Jim Bouldin says:

          John, my main point was that using the avoidance of the DP as a justification for the relative harmlessness of strikeouts is just lame. I could just as well argue that those strikeouts also prevent getting a single, successfully hitting and running, or forcing a fielder to make a play (which he sometimes won’t).

          As for DPs, I have some interesting results from a simulation model for offensive performance that I’ve built and have been experimenting with. I’m finding, surprisingly, that scoring isn’t really all that sensitive to DP numbers, regardless of whether it’s a slugging team or a small-ball team. Seems counter-intuitive, but I’m pretty sure the model’s fine. Each type of team, with a sixfold increase in DP rate from 2% of in-play outs to 12%, experiences a drop of only 0.12 runs per game. And that increase is for everyone in the lineup; if applied to just one hitter, the effect would be almost nil. These are very high-octane offenses, though, scoring about 7 runs a game.

          On measuring the game by who’s actually playing, I think the point there is that we’ve selected for a type of player who doesn’t have the full box of tools (not even close, actually); that’s what nsb’s comment shows. Back in the early ’30s those guys **scored runs**, and not because of huge homer numbers, but because they combined good homer numbers with *much* better other hitting skills (BA and lack of strikeouts, and hence OBP). Compare this year’s Yankee team to those of the early ’30s and this point is absolutely clear.
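
          For context, the kind of Monte Carlo inning model described above can be sketched in a few dozen lines. The sketch below is an illustration of the general approach only, not the model referenced here, and every event probability in it is invented; it varies the share of in-play outs that become double plays (runner on first, fewer than two outs) and reports runs per game.

            import random

            # Bare-bones inning simulator -- an illustration only, with invented probabilities.
            EVENT_PROBS = {"1B": 0.17, "2B": 0.05, "HR": 0.04, "BB": 0.09, "K": 0.20, "OUT": 0.45}

            def sim_inning(dp_rate, rng):
                """One half-inning; dp_rate = share of in-play outs that become a
                double play when there's a runner on first and fewer than two outs."""
                outs, runs = 0, 0
                bases = [False, False, False]            # 1B, 2B, 3B occupied?
                while outs < 3:
                    ev = rng.choices(list(EVENT_PROBS), weights=list(EVENT_PROBS.values()))[0]
                    if ev == "K":
                        outs += 1
                    elif ev == "OUT":
                        if bases[0] and outs < 2 and rng.random() < dp_rate:
                            outs += 2                    # double play: batter and runner on first
                            bases[0] = False
                        else:
                            outs += 1
                    elif ev == "BB":
                        if all(bases):
                            runs += 1                    # bases loaded: walk forces in a run
                        elif bases[0] and bases[1]:
                            bases[2] = True
                        elif bases[0]:
                            bases[1] = True
                        bases[0] = True
                    elif ev == "HR":
                        runs += 1 + sum(bases)
                        bases = [False, False, False]
                    else:                                # single or double, crude advancement
                        adv = 1 if ev == "1B" else 2
                        runs += sum(bases[3 - adv:])     # runners close enough to home score
                        bases = [False] * adv + bases[:3 - adv]
                        bases[adv - 1] = True            # batter takes first or second
                return runs

            def runs_per_game(dp_rate, games=20_000, seed=1):
                rng = random.Random(seed)
                return sum(sim_inning(dp_rate, rng) for _ in range(games * 9)) / games

            for dp in (0.02, 0.12):
                print(f"DP on {dp:.0%} of in-play outs: {runs_per_game(dp):.2f} runs/game")

          With these made-up inputs the drop from a 2% DP rate to a 12% DP rate is small, which is in the same spirit as the finding above, but nothing here should be read as validating those specific numbers.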

          • 23
            John Autin says:

            Jim — Talking about strikeouts preventing singles is changing the terms of the debate. You started this line knocking “the idea that Ks are no worse than in-play outs.”

            No one disputes that today’s high-K batting approach reduces singles. The debate is whether that cost is offset by the added extra-base hits.

            We disagree on the cost of GDPs. Is there a reason you’re running a model that generates 7 runs per team-game? I have nothing against 1894 baseball per se, I’m just curious.

            FWIW, I can’t find a saber consensus on the average value of a GIDP, but it’s generally placed around -0.35 runs.

          • 24
            John Autin says:

            Another attempt to estimate the cost of GDPs:

            Since GDPs are partly a function of opportunities, I wanted to control for that in a P-I search, but that can’t be done.

            Instead, I controlled for times on base. The average 2012 team-game had 12 times on base (hits + walks + HBP, not counting errors). So I searched for games with 11 to 13 times on base. I extended the search over 5 years, 2008-12.

            Here are the results, broken out by number of GDPs:
            – 0 GDP, .561 W%, 4.16 RBI/G (2941 games)
            – 1 GDP, .479 W%, 3.76 RBI/G (2388 games)
            – 2 GDP, .415 W%, 3.31 RBI/G (917 games)

            We’d rather see Runs than RBI, but the P-I will total only RBI, and I don’t think the Run proportions would be much different.
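
            For anyone who wants to run a similar split without the Play Index, here’s a sketch of the same tally from team-game logs (e.g., built from Retrosheet data). The column names are my assumptions, not actual P-I fields:

              import pandas as pd

              # One row per team-game; columns assumed: H, BB, HBP, GIDP, R, win (0/1).
              def w_pct_by_gidp(games: pd.DataFrame) -> pd.DataFrame:
                  g = games.copy()
                  g["times_on_base"] = g["H"] + g["BB"] + g["HBP"]   # errors excluded, as above
                  g = g[g["times_on_base"].between(11, 13)]          # control for baserunners
                  return g.groupby("GIDP").agg(
                      games=("win", "size"),
                      win_pct=("win", "mean"),
                      runs_per_game=("R", "mean"),
                  ).round(3)

              # usage, with a hypothetical file name:
              # games = pd.read_csv("team_game_logs_2008_2012.csv")
              # print(w_pct_by_gidp(games))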

          • 25
            Jim Bouldin says:

            John, my long reply to birtelcom got lost in the ether. You’re right, I did confuse the issue.

            There are two issues. One is that arguing that strikeouts are relatively harmless ignores the fact that, while not too much worse than other outs, they’re still outs. That was the main point I was getting at. In retrospect, it’s not the sabermetricians themselves who make that mistake, though; it’s people interpreting them and being careless or over-generalizing with the idea.

            However, there are in fact a couple of real issues here w.r.t. in-play outs and determining their relative importance. Arguing that Ks are little worse than other types of outs, after the fact, is not really legitimate if runs scored is the dependent variable in such analyses. This is because strikeouts affect balls in play (inversely) and therefore affect hits achieved and errors made, which directly affect the dependent variable. Those hits and errors–direct results of not striking out–will thus be “hidden” from the analysis. If you don’t make allowance for that, your estimates and conclusions will be in error. It’s a kind of post-hoc reasoning.

            Secondly, there is the big and under-appreciated issue that analyses of empirical data are inherently limited to the events that actually occurred. If there were relatively few events which one knows or has reason to suspect, a priori, are definitely “better” outs than strikeouts (sacrifice bunts, for example, or suicide squeezes to take it to the extreme), then the analysis won’t have much to say about them. This is why you have to do simulations on certain kinds of questions: because then you can institute the events you want and see what happens.

          • 27
            Jim Bouldin says:

            John, the reason for the high scoring is that the first question I wanted to address was whether a high-octane “speed” team would beat a high-octane “power” team. So I parameterized it with numbers from five to ten of the best speed and power guys from the last 30 years, and every guy in each lineup has those numbers. I then varied things like the frequency of stolen base attempts, taking the extra base, wild pitches, etc.

          • 29
            John Autin says:

            Jim @25 — OK, you make a good case for the value of modeling. But I still wonder about the present-day applicability of simulations that are scoring 7 runs per game. Can you tweak it down to historic norms?

            And while I do see the value of modeling so as to simulate possibilities that we cannot find in real baseball (e.g., more hit-and-runs), we can never be quite sure that our models are right. There will always be some value in the empirical data.

            One major change that we’ve seen over the last 30 years is the rise of short-term relief outings, so that batters are much more likely to be facing a fresh arm. We would expect that to increase strikeouts and lower batting averages and scoring, even with no change in batters’ approach.

            Further, we believe that batters *have* changed their approach, further increasing the strikeout rate, which is about 50% higher than 30 years ago.

            And yet … batting averages are down just slightly, and scoring is up slightly:
            – BA was .2605 for 1980-82, .2557 for 2010-12.
            – R/G was 4.20 for 1980-82, 4.33 for 2010-12.

            Given that, would you not start from the presumption that the changes in batters’ approach are at least somewhat efficient, or at least, *not* counterproductive?

          • 32
            Jim Bouldin says:

            John–Yes I think your last P is a reasonable conclusion. However, I’m not convinced that the offensive approaches actually taken were the best that *could* have been taken. I’d tend to argue that the fresher the arm, the more likely that small ball will be the optimal approach, but that’s just conjecture on my part–it’s not something I can explore with the model.

            Those DP numbers are interesting, because I’m not getting anything like that, so it’s cause for some investigation for sure. I parameterized my model using the “DP%” variable from the “Situational Hitting” tables at BR.com for the player sets I mentioned–a stat based on exactly the DP condition I use in the model (runner on first, <2 outs, only). For sluggers and speedsters, the values were ~ 10 and 8 percent respectively. I did find at least one discrepancy, in Pujols' data, between the given value and the value I computed from raw data as a check, but it was inconsequential.

            And, yes I can easily make the model produce more realistic run numbers, using values from the average player instead of the right tail. I'll be doing a lot more evaluations, but I'm pretty sure that should make little difference to the DP results.

            By the way, the ’32 (or was it ’30?) Yankees actually came very close to 7 runs a game.

          • 38
            Ed says:

            Fascinating discussion, John and Jim. I wonder, though, if these things matter as much as we think they do. Check out, for example, the 2009-2011 Arizona Diamondbacks. Here are their slash lines, GIDPs, and Ks for each year:

            2009: .253/.324/.418, 93 GIDP, 1298 Ks
            2010: .250/.325/.416, 113 GIDP, 1529 Ks
            2011: .250/.322/.413, 82 GIDP, 1249 Ks

            Looking at those numbers, you would expect that the 2010 team scored a lot fewer runs than the other two teams. Same slash lines, but 20+ more GIDP and 200+ more Ks. And yet, the reality is that all three teams scored about the same number of runs per game:

            2009: 4.44
            2010: 4.40
            2011: 4.51

            So yes, the 2010 team did score fewer runs than the other two. But we’re talking about a difference of 18 runs scored between the 2010 and 2011 teams. We could add the 2012 team in there and see basically the same thing.

            Obviously we can’t draw any conclusions from such a small sample size. But I still find it fascinating that the high-strikeout, high-DP team scored about the same number of runs as the low-strikeout, low-DP teams.

          • 46
            Jim Bouldin says:

            John at 29, 3rd P:

            I think that’s right on the money. I’ve just analyzed the runs scored for all games, 1910 to 2012. I’m finding definite, decreasing linear trends over that time in runs scored in each of innings 7, 8 and 9, relative to the mean of innings 1-6. Interestingly, these trends are stronger in the 7th and 8th innings than in the 9th. Also interesting: no big change at any given time point (outside of the random variation), just a steadily declining ramp.

          • 47
            Jim Bouldin says:

            Scratch that.

            Had the sign reversed: the opposite is in fact true. There’s been a steady *increase* in runs scored in the last 3 innings relative to the first six over the last 100 years. Completely unexpected result.

          • 48
            Jim Bouldin says:

            Scratch x 2; I was right the first time, except that the 7th inning shows little trend over time. The most prominent finding, however, is not so much the trends themselves as the year-to-year variation, which has decreased markedly over the last 3 decades or so.
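
            For what it’s worth, here’s a sketch of how that inning-by-inning comparison could be set up, assuming a table of average runs per team-inning by year; the column names are mine, not from any particular source:

              import numpy as np
              import pandas as pd

              # Expects one row per (year, inning) with the average runs per team-inning.
              def late_inning_trends(per_inning: pd.DataFrame) -> pd.DataFrame:
                  early = (per_inning[per_inning["inning"] <= 6]
                           .groupby("year")["runs_per_inning"].mean())
                  rows = []
                  for inn in (7, 8, 9):
                      late = (per_inning[per_inning["inning"] == inn]
                              .set_index("year")["runs_per_inning"])
                      ratio = (late / early).dropna()            # late-inning scoring vs. innings 1-6
                      slope = np.polyfit(ratio.index, ratio.values, 1)[0]
                      rows.append({"inning": inn,
                                   "trend_per_year": slope,      # sign shows rising or falling
                                   "stdev_of_ratio": ratio.std()})
                  return pd.DataFrame(rows)

              # usage, with a hypothetical file name:
              # per_inning = pd.read_csv("runs_by_inning_1910_2012.csv")
              # print(late_inning_trends(per_inning))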

        • 22
          birtelcom says:

          “The idea that Ks are no worse than in-play outs is one of the dumbest ideas sabermetricians have come up with, maybe the dumbest”

          Jim, I’m not quite sure what you mean by “dumbest idea” here. Do you mean you think it is factually inaccurate? Or do you mean that even if it is factually accurate, it is a bad idea to be circulating because it helps promote what you (and I too) view as a less appealing form of baseball?

          The observation that Ks, on average, reduce run scoring by very little more than non-K outs (all other things being equal) is not itself a theory, but simply a factual observation. The difference is not zero, but it is very small. In each game situation (outs, men on base), we can figure the average number of runs that score in the rest of the inning. We can then look at each type of event (single, double, triple, HR, BB, K, non-K out, etc.) occurring in each game situation and see how many runs on average score as a result of that event and the rest of the inning. Running these numbers for every major league PA produces a huge dataset. One can then compare how each different event actually affects run scoring on average. Tom Ruane of Retrosheet a few years ago ran these numbers for many seasons (http://www.retrosheet.org/Research/RuaneT/valueadd_art.htm), and others have run them, too. The numbers clearly show that the difference in run reduction, on average, between Ks and non-K outs is quite small, though more than zero (on average, 100 Ks over a season will cost, all other things being equal, a team just a handful of runs more over a season than 100 non-K outs).

          Note that this is very different than comparing Ks with balls in play. Balls in play (all other things being equal) are certainly way more valuable than Ks because some balls in play fall in for hits. But that’s a different matter than the effect of Ks and non-K outs.
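
          In code, the value-added approach described above (the one the linked Ruane study uses) boils down to a couple of group-bys over play-by-play data. A rough sketch, with column names that are my assumptions rather than Retrosheet field names:

            import pandas as pd

            # Expects one row per PA with: base_state (e.g. "1--"), outs, event (e.g. "K",
            # "non-K out", "1B", "HR"), and runs_rest_of_inning (runs scored from this PA
            # through the end of the inning, including runs on the play itself).
            def event_run_values(pbp: pd.DataFrame) -> pd.Series:
                # average runs to the end of the inning from each base-out state
                start_re = pbp.groupby(["base_state", "outs"])["runs_rest_of_inning"].transform("mean")
                # value of each PA = runs that actually followed it minus that starting expectation
                run_value = pbp["runs_rest_of_inning"] - start_re
                # average run value by event type -- compare "K" to "non-K out"
                return run_value.groupby(pbp["event"]).mean().sort_values()

            # usage, with a hypothetical file name:
            # pbp = pd.read_csv("play_by_play_with_inning_runs.csv")
            # print(event_run_values(pbp))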

          • 28
            Jim Bouldin says:

            birtelcom, reply got eaten, but I re-stated most of it in reply to John above. Thanks for the link to Ruane’s work.

    • 12
      Doug says:

      “The years 1993-2012 comprise 20 of the top 21 modern marks for BA on contact.”

      I think this speaks to the result of increasing HRs and strikeouts, especially HRs increasing throughout the lineup. With players swinging harder, rather than just trying to make contact, they will make more solid contact when they don’t swing and miss. Thus, a higher BABIP.

      • 16
        John Autin says:

        “With players swinging harder, rather than just trying to make contact, they will make more solid contact when they don’t swing and miss.”

        I think one aspect is missing from that analysis. It seems to me that the cost of swinging harder is not only more misses, but also less frequent squaring up when contact is made.

        For sure, a harder swing plus square contact should make for more hits from contact. But I don’t think we can think through the overall effect of harder swings without some idea of not just contact rates, but solid contact rates.

        • 21
          Doug says:

          Hard to know how to infer solid vs. weak contact from available data. One possibility may be to look at BA on contact with two strikes. There the difference in BA on contact (if any), when swinging hard or not, should be most evident.

          Notionally, players in former years would cut down on their swings after reaching two strikes, to reduce chances of striking out. Now, the reasoning is more along the lines of “you’re probably going to make an out anyway, so there’s little to lose (and much to gain) by continuing to swing hard”.

      • 17
        kds says:

        Not a higher BABIP. BABIP does not include HR, but BA on contact does. So a higher HR rate with the same BACON means a lower BABIP. 1930 BABIP, .312; 2012, .297. More balls going over the fence, and more of those that don’t go out are turned into outs. (Some of this may be general improvement in defense.)
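
        To make the distinction concrete: BABIP strips home runs out of both the numerator and the denominator, while BA on contact keeps them in, so with BA on contact held equal, more homers necessarily means a lower BABIP. A tiny sketch with the standard formulas; the stat lines are made up, not the real 1930 or 2012 totals:

          def ba_on_contact(h, ab, so):
              return h / (ab - so)                      # HRs stay in

          def babip(h, hr, ab, so, sf=0):
              return (h - hr) / (ab - so - hr + sf)     # HRs removed top and bottom

          # Two made-up league lines with identical contact results but different HR counts:
          low_hr = dict(h=1600, hr=100, ab=5500, so=600)
          high_hr = dict(h=1600, hr=200, ab=5500, so=600)

          for line in (low_hr, high_hr):
              print(round(ba_on_contact(line["h"], line["ab"], line["so"]), 3),   # same .327
                    round(babip(**line), 3))                                      # .312 vs. .298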

      • 37
        Bryan says:

        I think there’s another element missing here. BABiP is subject to official scorers’ verdicts on what constitutes a hit. If scorers are more reluctant to charge hometown guys with errors, BABiP goes up regardless of the type of contact. If we add Reached on Errors to BABiP, is Doug’s italicized statement above still true?

        • 40
          John Autin says:

          Bryan — Re: the impact of changing standards in official scoring:

          My rough comparison for the 10-year periods 1951-60 and 2003-12 suggests a ceiling of about a 1.7% increase in hits.

          My method:

          1) Find the total of (Hits + Reached On Error).

          2) For 1951-60, compute the percentage of that figure that is accounted for by Hits. In other words, find H/(H+ROE).

          3) Apply that figure to the total of H+ROE for 2003-2012. That is the estimated Hits for 2003-12 if the same scoring standards had been applied.

          4) Express the actual Hits for 2003-12 as a percentage of the estimated Hits.

          It comes out to a Hits increase of about 1.7%.

          Now, that’s assuming that the only difference in the (H+ROE) for the two periods is a change in scoring standards, which is obviously unreasonable. We know that both field conditions and equipment have improved. I’d guess that the “real” impact of the change in scorers’ standards is no more than a 1.0% increase in hits.

          Let me know if I screwed that up. 🙂
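
          In code, those four steps look like this; the H and ROE totals below are placeholders for illustration, not the actual 1951-60 or 2003-12 figures:

            # Step 1 is just forming the H+ROE totals that feed the function below.
            def hits_inflation(h_old, roe_old, h_new, roe_new):
                share_old = h_old / (h_old + roe_old)         # step 2: hits' share of H+ROE, old period
                est_h_new = share_old * (h_new + roe_new)     # step 3: expected hits under old standards
                return h_new / est_h_new - 1                  # step 4: actual vs. estimated hits

            # placeholder totals, for illustration only:
            print(f"{hits_inflation(h_old=240_000, roe_old=14_000, h_new=430_000, roe_new=18_000):.1%}")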

          • 42
            Bryan says:

            Sounds right to me, John, and it seems pretty significant. 1.7% is about a 5-point increase in BABiP, right (BABiP hovers around .300; multiply by 1.017 and we get .3051)? Let’s say 2-3 of those 5 points come from changes in scorers’ standards. I don’t know where to find league BABiP over time, but league batting averages were 4-14 points higher during the “steroid era” than they were right after the DH was instituted (and even closer to averages in the ’50s) and I’d expect BA to fluctuate more than BABiP.

            It’s possible that scorers’ standards represent less than half of the increase in hits and this is all moot, but I think there’s a real chance that they’re a major driver of any change in league BABiP. That, and errors probably shouldn’t be treated differently from hits on a hitter’s record, but that’s a separate conversation.

          • 43
            Doug says:

            There is also some dependency between official scoring standards and the two other factors mentioned, field conditions and equipment. The effect of the latter on fielders’ performance no doubt influences official scoring standards by changing the expectations of the scorer.

            In fact, it may be that scoring standards are as they have always been – an error is charged when fielders fail to meet the scorers’ expectations of how well a fielding chance was handled. As to home-town bias, is there reason to believe that is more or less prevalent now than in the past? (I don’t know – just asking the question).

    • 45
      DaveKingman says:

      Does the change in ballparks have anything to do with this? I don’t know how to research this, but my understanding was that in the “Old Days” there were a lot more triples, foul territory to catch popups, etc.

      And that in today’s ballparks, a batted ball is either a home run, in play, or a foul ball.

      I’m grossly simplifying, of course. But is there any good way to analyze this?

  6. 13
    Doug says:

    On the first chart (HRs as % of PAs), it appears we are still on the downward trend from the 2000 high. Since 2002, it’s been cycles of one or two down years and one up year. In each cycle, both the up and down years have been lower than in the preceding cycle.

    • 15
      John Autin says:

      Doug — Agreed, but then you’re comparing to the all-time HR peak.

      I don’t know what the accepted definition of the PED years is, but let’s define the peak HR era as from 1994 (the 2nd straight year of big increase) through 2006 (the last year over 2.7%). The average HR% of that period is 2.78%.

      If we’re talking about how times have changed, I think it’s more reasonable to measure against that 2.78% figure than against the 2.99% of year 2000.

      • 20
        Doug says:

        There were also up and down years on the ramp up to 2000. 1998 was a down year, whereas 2012, with the same HR per PA, is in an up year. Another reason to suspect the current cycle will continue with HRs per PA dropping further.

        Another noteworthy point about the first two charts is the years 1985, 1986 and 1987. The prevailing wisdom is that 1987 was a one-year fluke with a juiced ball or some such anomaly. Yet your charts suggest it was the culmination of a 3-year run-up in HR rates, a run-up stopped dead in its tracks in 1988.

        • 26
          John Autin says:

          “There were also up and down years on the ramp up to 2000. 1998 was a down year…”

          Doug, that’s true, and perhaps I took some license in choosing the narrative-rich 1998 as the point of comparison. It’s still true that the 1998 HR rate (2.69% of PAs) is much closer to the 1994-2006 average (2.78%) than is the year 2000 (2.99%).

          “…whereas 2012, with the same HR per PA, is in an up year. Another reason to suspect the current cycle will continue with HRs per PA dropping further.”

          2012 *is* an up year, and expecting a decline is good statistical practice. But I still feel great uncertainty about that, mainly because of the unrelenting rise in strikeouts.

          The SO% jumped 6.2% in 2012 over 2011, the 9th-largest increase since 1920, and larger than the 3-year rise from 2008 to 2011. Is it just coincidence that the HR% also spiked last year?

          I know there are factors in the SO% besides the hitters’ approach. But I’m not quite ready to call 2012 an anomalous HR%.

  7. 33
    John Autin says:

    Jim @32 — Yes, the 1930 and ’31 Yankees averaged 6.9 R/G (rounding to one decimal). But I’m not sure what those teams can teach us about strategy, beyond “it’s good to have two of the four best hitters ever.” (OK, I’m being facetious.)

    Those leagues averaged 5.4 and 5.1 R/G, BTW.

    • 34
      Jim Bouldin says:

      They teach us–or at least Lou Gehrig does–that it’s a good idea not to strike out a lot! Pay attention, Granderson.

    • 39
      Mike L says:

      Actually, John A., you aren’t being facetious. It does help to have two of the greatest hitters of all time. If you think about this entire line of argument regarding strikeouts, BABIP, HRs, etc., it’s predicated to an extent on the average-to-good hitter. Superior hitters with superior strength and hand-eye coordination may simply square up better and make better contact. Since not everyone can be a Ted Williams, they have to compensate by sacrificing bat control for what they think is bat speed. Also, couple that with the apparent increase in the number of pitchers who can simply throw very hard, and the batter has to commit earlier, making it more difficult to have the bat in the perfect plane for the ball.

    • 41
      Lawrence Azrin says:

      It also helps a lot to have other excellent hitters, such as Bill Dickey, Tony Lazzeri, Earle Combs, and Ben Chapman. Even several of their pitchers, such as Red Ruffing (who had a higher OPS+ in 1930 than everyone except Ruth/Gehrig), had good hitting seasons.

  8. 49
    birtelcom says:

    I’ll be entirely speculative for a moment — the following hypothesis is wholly un-testable I think.

    I wonder if free agency itself has added an incentive for players to move further toward power-based rather than contact-based skills. Hitters and pitchers who emphasize contact as part of their game tend to be more valuable to a select group of teams: contact pitchers tend to need good fielders behind them, and contact hitters need a strong lineup around them to promote their singles into runs. Strikeout pitchers and home run hitters, in contrast, may have more completely portable skills: every team can effectively use a strikeout pitcher or a home run hitter. Increased portability of skills means more competition for a free agent’s services and increased value in the marketplace. Over time, perhaps this dynamic moves the game at the major league level increasingly towards a power, rather than contact, orientation. Just a theory.

  9. 50

    […] the invaluable High Heat Stats, a chart of home runs as a proportion of plate appearances over time.  Home run hitting increased […]
