Wednesday, October 31, 2007

As long as we've been discussing stolen bases, let's glance at the 2007 leaders.
A total of 457 bases were stolen in the entire season. Of them, 314 were taken by the 20 leaders in this chart:
Now let's look at them again, this time emphasizing pure base stealing ability. The next chart is sorted in order of stolen bases per times on base; that is, SB / (H + BB + HBP - HR). It indicates how often a player stole a base given the number of opportunities he had to do so. (The cutoff point for the chart was 80 plate appearances.)
Mike Lyons of Bet Shemesh not only led the league with 32 steals, leaving Netanya's Josh Doane in the dust with 25, but he was in a class of his own in terms of base-stealing frequency, stealing a base, on average, in over 71% of his times on base. That's 46% more often than #2 John Toussas of Raanana, and over four times as high as the IBL league average of 16.5% (compare that with about 5% in the major leagues!). And Lyons was caught just 4 times, in 11% of his attempts.
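For anyone who wants to compute the rate stat themselves, it's a one-liner. A minimal Python sketch, using a purely hypothetical stat line (not any real player's totals) chosen to land near the league-leading rate:

```python
def sb_per_times_on_base(sb, h, bb, hbp, hr):
    """Stolen bases per times on base: SB / (H + BB + HBP - HR).

    Home runs are subtracted because a home run doesn't leave the
    batter on base with a chance to steal.
    """
    return sb / (h + bb + hbp - hr)

# Hypothetical stat line, for illustration only:
rate = sb_per_times_on_base(sb=32, h=40, bb=12, hbp=1, hr=8)
print(f"{rate:.1%}")  # 71.1%
```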
I don't have any earthshattering conclusions here. Just the stats, ma'am.
Tuesday, October 30, 2007
As long as I was discussing Eladio Rodriguez and Jason Rees, I should have pointed out that they were also apparently effective fielders. I say apparently because the only fielding stat I currently have available is errors, and those are known to be only a partial reflection of fielding skill. Rees was charged with two errors over the season in this error-prone league, while Rodriguez, a catcher, was charged with one error and two passed balls. Since catcher is the most challenging defensive position after pitcher, that may give Eladio an edge in his career; catchers aren't expected to be good hitters too.
Also noteworthy is that neither player stood out for stealing bases. Rees was successful enough, stealing 14 (and caught twice), putting him in a four-way tie for ninth place and placing him eleventh for his rate of bases stolen per times on base.
Rodriguez, however, stole just one base the entire season.
So, IBL fans, remember where you stashed those autographed game balls, caps, programs, tickets and Burgers Bar hamburger wrappers. Maybe they'll be worth something some day. Maybe.
Sunday, October 28, 2007
While the Red Sox were clobbering the Rockies the other day, IBL fans were treated to a smile when the Yankees announced the signing of IBL batting stars Jason Rees and Eladio Rodriguez, the first IBL position players to sign pro contracts and the first IBL alumni to sign with MLB farm systems.
Congratulations to Jason and Eladio, as well as to the league for giving them the platform from which to get noticed by the big leagues.
But don't expect to see Rees and Rodriguez in Yankees uniforms any time soon. They were signed to minor league contracts, and both have a long way to go before they're likely to make the majors.
Eladio has minor league experience already, having been signed by the Boston Red Sox in 1998. His (incomplete) record in the minors shows him having played in leagues at the A- and A+ levels, playing variously as catcher, outfielder and pitcher, most recently in 2004. Before the IBL, he played in the Dominican Republic Winter League, where he apparently only had seven at bats in as many games in the 2006-7 season. At 28 years old, typically the peak of a ballplayer's career, he would seem to be a long shot for a major league roster.
Update: Eladio's complete minor-league record is here.
Rees's baseball resume is even thinner. He's played college ball, and not even at the higher levels. And he's played in Australia. None of which is to say that he doesn't have what it takes to make it in the big leagues, but neither is it much evidence that he does. He is only 23, however, and his best years may yet be ahead.
So why Rees and Rodriguez? Most obviously, the two were the IBL's home run leaders. Rees led the league with 17, followed closely by Rodriguez at 16. But Rodriguez was injured for about a week of the season, and he actually hit home runs at a faster pace than Rees when he played: 6.4 at bats per home run, compared to 7.6 for Rees.
Beyond that, how did the two do in the IBL? Let's see how they ranked among the 50 batters with at least 80 plate appearances (that's about 2 per game).
As you can see, despite Rees's higher home run total, Rodriguez was a substantially better hitter over his 34 games than Rees over his 41. Rees also walked less often (9.7% of plate appearances vs. 13.6% for Rodriguez) and struck out more (15.3% of PAs vs. 12.7%). And there were several other batters with better overall stats than Rees.
Considering the short season and the small sample size (remember, about one-fifth the length of a major league season), it's hard to see why the Yankees would sign Rees just on the strength of his home runs rather than, say, Gregg Raymundo (12 HRs), Johnny Lopez (14 HRs) or Adalberto Paulino (11 HRs in just 92 PAs). Though of course I'm not privy to any of their personal plans, and perhaps none of them were available or the Yankees turned them down for other reasons.
On the face of it, it looks like the Yanks just signed the IBL's two home run leaders without thinking any further. If that's the case, this may turn out to be more a PR move than anything else in a city with America's largest Jewish population.
Still, it will be good to see how the two hold up in the minors. We'll have two more data points for assessing the IBL's league quality.
Wednesday, October 17, 2007
First, credit where credit is due
I don't want anyone to get the idea that using error rates to assess league quality is a new idea. In fact, Bill James himself identified fielding percentage as an indicator of league quality. The error rate and the fielding percentage are just two ways of looking at the same information.
Historical charts of major league stats are available at A Graphical History of Baseball (hat tip: Baseball Musings).
Here's the chart of Errors Per Game (per team):
By this standard alone, the IBL would match the early 1900s with 2.2 errors per nine innings. (If it's any consolation, some of the 2007 rookie leagues are in the same zone.)
Stolen base rates, however, do not track the long-term improvements in league quality. Steals fell to their lowest levels around 1950, then rose until the 1980s, and have declined since then:
I don't know what changes in the game gave rise to these trends in the steal rate - perhaps shifts in runner skill versus pitcher skill? Or maybe it was all Rickey Henderson's fault!
(Update: Duh! Of course, a major factor in the number of steals per game is the overall level of offense - the more baserunners, the more steal opportunities. That's why the relevant steal rate is steals per runner on base, not steals per game.)
It certainly remains possible that the steal rate correlates with league quality at any particular point in time, as my earlier graphs seem to demonstrate. But the steal rate would seem to be a far less reliable gauge of league quality than the error rate, so I'm less inclined to downgrade my assessment of the IBL's quality on the sole basis of its high steal rate. (The IBL's steal rate was 2.5 per nine innings, nearly double the MLB's record levels from the early 1900s!)
Tuesday, October 16, 2007
The previous post on estimating the level of play in the IBL generated some interesting comments, including on the Baseball Fever Sabermetrics Forum and Tom Tango's blog. Also, Rabbi Jason Miller noticed my citation of his game observations, and commented.
I'd like to respond to the comments, and add some more observations of my own.
Why errors and steals?
Tango is surprised that error rates and stolen base rates correlate at all with the level of the league. After all, the reason batting averages, or walk and strikeout rates, don't track the league level is that they are the result of the confrontation between the batter and pitcher/fielders. Better leagues have better hitters, but also better pitchers and fielders. On the whole, they balance each other out, so the majors don't have higher batting averages or walk rates than weaker leagues. Sometimes pitching overpowers hitting or vice versa, but there's no connection between the relative strength of hitters and fielders and the overall level of league play.
You might expect the same to apply to errors and stolen bases. An error is not just the fault of the fielder. Some batters consistently reach base on error far more often than other batters, presumably because they're hitting more hard-to-field balls. Shouldn't that balance out the stronger fielding in the stronger leagues?
A stolen base certainly is not the sole fault of the fielding team; arguably, it's first of all a skill of the baserunner. So why should weaker leagues have higher steal rates? Don't they have less skilled runners?
On the one hand, the graphs speak for themselves. The correlations between league level and error rates per at-bat (0.93) and stolen base rates per runner on base (0.85) are stunningly strong. If you leave out the inconsistent rookie leagues, they're even higher (0.97 and 0.88 respectively). But that doesn't absolve us of an explanation.
The answer, I think, is that the league-level variations we see in both error rate and steal rate are primarily a function of fielding quality. It may be true that some hitters are better able to hit balls that are hard to field, but at lower levels of play that's not the main factor in producing errors. To quote myself:
What I think you're seeing with the top major leaguers is an ability of exceptional batters not just to "hit it where they ain't", but also to "hit it where it's hard to field". What I think we're seeing with high overall league error rates in the minors is at the opposite end of the defensive ability scale - not balls hit where it's hard to play them, but routine plays that the sub-major-leaguers flub: dropped catches, wild throws, bobbled grounders.
That is, I suspect that the further you go down the ability ladder, the more errors reflect unprofessional fielding rather than skillful batting. Hence, overall higher error rates in overall weaker leagues.
A similar argument can be made regarding steals. While running speed is important in baseball, it's not necessarily that much higher in the majors than in weaker leagues. What is substantially higher is fielding ability, as a result of more experience and winnowing out the poor fielders. Plenty of minor league players can run as fast as their major league counterparts, but minor league pitchers and catchers aren't as practiced at holding runners on base and throwing them out at second.
The upshot of this analysis is that both of these measures are, at least at league level, essentially indicators of fielding ability. We still have no independent measures of league level based on batting ability or pitching ability. The assessment is very one-dimensional. Unfortunately, stats such as wild pitches or hit batters do not seem to be available for the minor leagues; they could be good indexes of pitcher skill.
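As an aside, the correlation figures quoted above are ordinary Pearson coefficients, which are easy to reproduce. A sketch in pure Python, with made-up illustrative numbers (not any real league's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up illustration: level ratings (1 = rookie ... 6 = MLB) vs. errors per at-bat.
levels = [1, 2, 3, 4, 5, 6]
errors_per_ab = [0.060, 0.048, 0.041, 0.035, 0.030, 0.017]
r = pearson_r(levels, errors_per_ab)  # strongly negative: errors fall as level rises
```

Note the sign: since error rates fall as league level rises, the raw coefficient comes out negative; the figures quoted above are magnitudes.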
More about the stats and graphs
Tango is probably right in suggesting that I had the denominators wrong - errors should be measured per at-bat, and steals per runner on base. In practice, though, those changes don't affect the results in any significant way.
On reflection, I would drop the "unearned runs" and "defense efficiency" measures. The former is just a roundabout and unreliable way of measuring the error rate - it might be useful if you don't have error stats, but it's generally better to measure errors directly. The latter measures the defense's success in putting out batters on balls in play. However, the correlation between batting average on balls in play (BABIP = (H - HR) / (AB - HR - SO)) and league level is very weak (see below). In practice, then, the DER graph is also just another way of measuring the error rate. That leaves us with two relevant stats: errors per at bat and stolen bases per runner on base.
We can plot them against each other for another picture of the league quality level (click to enlarge):
In this graph, I've indicated the league level by the plot symbol: blue spheres for the majors, green spheres for AAA, gray spheres for AA, red spheres for A+, gold spheres for A, gray diamonds for A-, orange spheres for rookie leagues. Three independent leagues have been marked with stars: the Atlantic League (red), Canada's Intercounty Baseball League (orange), and the Israel Baseball League (blue). The regression line is based only on the majors and ranked minor leagues, including the rookie leagues but excluding the independents.
With the exception of the steal-frenzied IBL, the relationship between the steal rate and error rate is clear and strong (0.92 for the ranked leagues). Also, the grouping of leagues by level is mostly distinct. AAA and AA seem quite close in level here - maybe fielding levels aren't different enough to distinguish between them. Note that the Atlantic League falls in the AA-AAA area, as both the league and observers generally claim. A and A- leagues are quite close, but A+ is clearly at a rank of its own. And the rookie leagues show a wide range of levels, but they cluster quite close to the SB/E regression line (with the Canadian IBL somewhere in the middle).
Arguably, the distance along this line could be used as an estimate of league quality, at least as indicated by fielding ability. I'll try to calculate those estimates, time permitting.
Without further ado, here's the graph of BABIP I promised. There's a correlation between BABIP and league level, but it's weak (0.33) and of little value in assessing league quality.
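For reference, BABIP as defined above is another one-line calculation. A quick sketch with a made-up stat line:

```python
def babip(h, hr, ab, so):
    """Batting average on balls in play: (H - HR) / (AB - HR - SO).

    HR and SO are excluded because neither puts a ball in play
    for the defense to convert (or flub).
    """
    return (h - hr) / (ab - hr - so)

# Made-up team line, for illustration only:
print(round(babip(h=500, hr=40, ab=1800, so=350), 3))  # 0.326
```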
A final comment on the stats. Sabermetricians have often derided the error stats and fielding percentage, not without good reason: "Errors and therefore fielding percentage are an inadequate way of measuring fielders because of the subjective nature of the decisions and because they only record failures and thus fail to take into account the fact that good fielders cover more ground and therefore record more outs" - Dan Agonistes.
But in the aggregate, I think I've shown that errors are a relevant measure of league quality level, and one of the few such measures that are widely gathered and published for baseball leagues of all levels of play. Keep that in mind next time someone touts his new top-secret formula for assessing fielding ability or league quality.
And now back to the rabbi.
Rabbi Miller defers to the judgment of Jay Sokol, who attended the IBL game with him:
Jay is the General Manager for the Delaware Cows of the Great Lakes League, which is a summer league dedicated to helping college players get used to the wooden bats they'll use in the minor leagues. Jay thought the level of play in the IBL was very similar to the wood bat summer league. He even recognized an IBL player whom he previously scouted for the Cows.
I certainly defer to Sokol's baseball judgment - I'm just a fan and a novice sabermetrician. I would point out, though, that the game they watched was between Netanya and Raanana, two of the IBL's weaker teams (at least until Netanya's closing weeks). The game's box score and play-by-play log indicate that Raanana committed five errors - high even by their own averages (2.1 errors per game, the highest in the IBL). So I wouldn't rely on a single game to assess the IBL's level of play. But thanks for the input!
Thursday, October 11, 2007
I'm not done yet with run estimation; I'm doing some work on the Linear Weights method. But I'd like to take a break to examine a different question, one which was on the minds of many fans last summer: What level of baseball did the IBL play?
It was clearly far from major-league standards, but did it reach minor league levels? If so, which level of the minors - AAA (the highest)? AA? Single-A? Rookie ball?
There are a few ways to go about answering the question. We can:
1. Look at people's subjective impressions.
2. Look at where the IBL players were recruited from.
3. Compare the performance of IBL players with other leagues they played in before or after the IBL.
4. Identify statistics which vary based on the level of play in a baseball league, and see how the IBL measured up.
I'd like to try all of these.
1. Subjective impressions
- "With the quality of players we expect to attract, we are going to be able to provide a high-caliber level of play, probably most akin to Rookie League/Class A ball in the U.S." -- the IBL, announcing its expectations in advance of the season opening.
- "It's a little higher level than I'm used to in college." -- IBL pitcher Aryeh Rosenbaum, describing his first two weeks of the season.
- "The quality of play sometimes approached major league standards, while occasionally sinking to a high school level." -- IBL pitcher Travis Zier, writing after the season.
- "The level of play was somewhere between college ball and AA minor league." -- Rabbi Jason Miller, after attending a game.
There's something of a consensus: better than college ball, somewhere around the lowest ranks of the minor leagues.
2. Where the players came from
I don't have a complete breakdown, but it's clear that many of the players had only played college ball before coming to the IBL. Others had played in the lower ranks of the minor leagues, and some had played in independent leagues in the U.S., Europe or elsewhere. This is consistent with the assessment by method 1.
3. Comparing IBL player performance with their play in other leagues
I'm working on this, but it will take some time to gather and organize the data.
4. Compare the IBL with other leagues in terms of statistics which distinguish level of play
See, for example, the suggestion of "SABR Matt" at the beginning of this Baseball Fever posting:
Think about what kinds of events happen in the weakest of leagues and look for them in any league to measure its quality relative to the strongest of leagues.
Bill James wrote down in one of his abstracts something like a dozen different kinds of things that happen a lot in bad baseball leagues and rarely in good ones. That list included Errors, "rare events" (like triple plays, baserunning outs, mistakes of aggression, base hits on pop-ups (why do you think we call those Texas Leaguers?) etc), passed balls, wild pitches, hit batsmen etc.
The problem here is that it's hard to find statistics for most of these "rare events". Baseball Reference, for example, a tremendous repository of baseball statistics, doesn't report HBPs, passed balls or wild pitches for the minor leagues. And neither do the websites of the leagues themselves.
About the only statistics I could find which fit this description are related to errors and stolen bases. It seems obvious that there should be more errors in weaker leagues, since the fielding isn't as good. For the same reason, presumably, there are more steals - the defense isn't as good at catching them.
(Note that you can't use batting-related data to distinguish level of play. A harder league has both better batters and better pitchers, so there's no relationship between, say, batting average and level of play.)
I collected data for the 2007 season of both leagues of the MLB and all the minor leagues listed on Baseball Reference and computed the following stats: Stolen bases per nine innings, Errors per nine innings, Unearned Run Average (which is like the ERA, but for unearned runs), and Defense Efficiency Ratio, a measure of defensive play which also takes errors into account.
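The per-nine-innings rates and the Unearned Run Average are simple ratios; here's a sketch of how I'd compute them (the Defense Efficiency Ratio is omitted, since its exact definition varies between sources). The season totals below are made up for illustration:

```python
def per_nine(events, innings):
    """Rate of any counted event per nine innings."""
    return 9 * events / innings

def uera(runs, earned_runs, innings):
    """Unearned Run Average: like ERA, but counting only unearned runs."""
    return 9 * (runs - earned_runs) / innings

# Made-up league totals, for illustration only:
print(per_nine(180, 720))   # 2.25 errors per nine innings
print(uera(850, 680, 720))  # 2.125
```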
Then I assigned each league a "level of play" rating: 1 for Rookie, 2 for A/A-, 3 for A+, 4 for AA, 5 for AAA and 6 for MLB. (I combined A and A- based on the preliminary results, which indicated they were too similar in level to distinguish between them.)
And now, the graphs. The horizontal axis represents the league level rating, and the vertical axis is the statistic in question. The IBL is marked by a large blue star.
The IBL seems to fall somewhere in the Rookie Ball spectrum, though obviously that's a rather broad spectrum. The level of play seems much more closely correlated with the league ranking for the post-rookie leagues than for rookie ball. I was actually surprised by how nicely linear the graph is for the most part.
Out of curiosity, I added to the graph two other independent leagues with no official ranking level, because IBL players have played there either before or after the IBL. Ryan Crotin, one of the IBL's leading hitters, played several seasons in Canada's IBL - the Intercounty Baseball League - where he was also a batting leader. And two pitchers from Israel's IBL, Rafael Bergstrom and Jason Benson, were signed by the independent Atlantic League after the season ended in Israel.
Judging by the graphs above, the Canadian IBL also ranks as a rookie league in level of play, not far from the Israeli IBL in level of difficulty. The Atlantic League, by contrast - labeled "ATL" on the graphs - ranks at about 3.5, somewhere between A+ and AA ball. This contrasts with descriptions categorizing the Atlantic League as "between AA and AAA", but I wouldn't place too much faith in the handful of statistics presented here. There's much more that goes into quality of play than errors and stolen bases, and anyway you could move the ATL point to 4.5 without getting too far off the regression line.
(My, those IBL players just kept stealing bases! Could that be a sign that I've got the level pegged too high? I guess I need data on college ball....)
Wednesday, October 10, 2007
In my earlier analysis of IBL park factors, you may have noticed an apparent anomaly. Gezer's park factor for hits is 10% higher than Yarkon's; its factor for doubles is 30% higher, for home runs 170% higher, and for total bases 30% higher. But the factor for runs is only 5% higher. Despite being much better for batting, Gezer did not produce that many more runs than the other IBL parks. How is this possible?
Of course, Gezer also had a 14% lower factor for walks than Yarkon, and no triples, but that hardly seems an adequate explanation.
Now that I've tabulated errors, the explanation is clear. Gezer had the lowest error rate of the three parks, with Yarkon the highest:
Using the same technique as before to calculate park factors for errors, we get the following:
Does this make sense? Is it at all reasonable that ballparks should affect the error rate?
Of course it is.
First, for the most part, errors can occur only on balls in play. With Gezer's higher home run and strikeout rates, it lags the other fields in balls in play by about 5%.
Still, the error rate per ball in play ranges from 7.1% at Gezer to 9.3% at Yarkon. (Wow, that's a lot, no?)
Most of that can presumably be attributed to the size of the playing field. In Yarkon's (relatively) large outfield, flubbing the throw from the outfield - or the catch itself - is more likely. In Gezer, there isn't as far to run to go after a fly ball, and there isn't as far to throw to get it to an infielder.
Calculating park effects for errors per ball in play, we get:
So yes, errors do have park factors. Or perhaps parks have error factors. Either way, in the IBL, it's significant.
Tuesday, October 9, 2007
With some help from experienced sabermetricians over at the Baseball Fever forums, I may have solved the Case of the Too Many Runs.
Sure enough, the main missing factor seems to be the errors, which were so much more frequent in the IBL than in the majors. MLB run estimators can ignore errors and get pretty good results, but that won't do for the IBL. I expected to do plenty of multiplier tweaking to get the numbers to match up, but I was pleasantly surprised.
Turns out, Tango Tiger has already worked out multipliers for the BaseRuns estimator which take errors into account, along with just about every other imaginable game event. The complete set of weights can be found on his website.
I don't have stats for every category on that list, so I just took the ones I have available.
Remember the general formula for BaseRuns:
BaseRuns = A x B / (B + C) + HR
A = number of runners on base
B = expected number of bases advanced
C = outs
HR = home runs, of course
Using Tango Tiger's weights for the data I currently have, that yields:
A = (H - HR) + BB + IBB + HBP + E + 0.08*Sac
B = .726*1B + 1.948*2B + 3.134*3B + 1.694*HR + .052*BB - .483*IBB + .163*HBP + .799*E + .727*Sac -.057*K -.004*(Other Outs) + .813*SB - 1.188*CS
C = AB - H + .92*Sac
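Those weights drop straight into a function. A sketch restricted to the same categories quoted above (b1, b2, b3 stand for singles, doubles and triples; other_outs means non-strikeout batting outs):

```python
def base_runs(h, b1, b2, b3, hr, bb, ibb, hbp, e, sac, k, other_outs,
              sb, cs, ab):
    """BaseRuns with Tango's error-aware weights, limited to the
    categories listed above. Returns an estimated number of runs."""
    A = (h - hr) + bb + ibb + hbp + e + 0.08 * sac
    B = (0.726 * b1 + 1.948 * b2 + 3.134 * b3 + 1.694 * hr
         + 0.052 * bb - 0.483 * ibb + 0.163 * hbp + 0.799 * e
         + 0.727 * sac - 0.057 * k - 0.004 * other_outs
         + 0.813 * sb - 1.188 * cs)
    C = ab - h + 0.92 * sac
    return A * B / (B + C) + hr

# Sanity check: a lone solo home run should produce exactly one run.
print(base_runs(h=1, b1=0, b2=0, b3=0, hr=1, bb=0, ibb=0, hbp=0, e=0,
                sac=0, k=0, other_outs=0, sb=0, cs=0, ab=1))  # 1.0
```

The solo-homer check is a handy structural property of BaseRuns: with no runners on base (A = 0), the only run is the home run itself.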
Thankfully, the IBL does indeed obey the universal natural laws of baseball. The estimates are all accurate to within 6.2%, with the league estimate less than 1% off the actual number of runs scored. And this was achieved without any coefficient tweaking on my part - Tango's formula was left as is, except for omitting those terms for which I have no data.
It seems fair to conclude that the main difference in IBL run production compared to the MLB was the high rate of errors, followed secondarily by the higher stolen base rate.
To check the robustness of these results, I applied them to the IBL stats broken down by field. Sure enough, BaseRuns accurately predicted the runs scored in three different run scoring environments to within 2% overall (though, with the small sample sizes, estimates were off by up to 15% for some individual teams at specific venues).
I could try the same exercise with a different run estimator adjusted for errors and steals, but I'm not sure there's much point. The conclusion is clear: You can't analyze IBL run scoring without accounting for errors.
Posted by iblemetrician at 7:27 PM
Sunday, October 7, 2007
Last week, I noted that Bill James's basic Runs Created formula substantially underestimates the number of runs actually scored by IBL teams, by an average of 25%. I suggested two possible explanations for this:
1. IBL teams scored runs in ways not accounted for by the Runs Created formula, such as stolen bases, sacrifices or fielding errors.
2. The Runs Created formula only provides accurate estimates within a range of values typical of major league baseball, but it is not correctly calibrated for the IBL's level of play.
I'd like to eliminate the second hypothesis from consideration.
First, note that the range of batting stats in the IBL isn't that far out of line with the MLB. Team stats range from .234 to .294 for batting average, from .368 to .419 for on-base percentage, and from .327 to .515 for slugging average. For the MLB, the equivalent values for 2007 are .248-.288 (AVG), .317-.363 (OBP) and .385-.461 (SLG). The IBL's averages are generally higher, especially for walks and extra-base hits, but not exceptionally so. A run estimation formula which can't handle them wouldn't seem to be of much use, and it's hard to believe that the usual formulas would be out of their calibration range.
To check this, we can apply some alternative run estimation formulas based on different statistical approaches and see whether they yield IBL estimates similar to James's formula. If the IBL's figures were out of calibration range for James's Runs Created, there's no reason to believe they'd also be out of range for every other formula; and even if they were, different formulas should respond differently to out-of-range values rather than all erring by the same amount.
So let's take a look. I've chosen the following formulas, along with James's RC-Basic: Base Runs (BsR), ERP and XRR. The results:
As you can see, all the formulas substantially underestimate IBL run creation, and all by more or less the same amount, by some 30 runs per team on average. Clearly, hypothesis 2 is refuted. IBL teams were scoring runs in ways not captured by the conventional run estimation formulas.
What might those be? Let's look at the frequency of some game events in the 2007 MLB and IBL.
I've cherry-picked the interesting numbers from the season averages and calculated them in terms of events per 100 plate appearances, to normalize for the different game lengths. The results (click to enlarge):
IBL teams scored 33% more runs per PA than in the majors, even though - perhaps surprisingly - there isn't much difference in the average rates of hits, home runs or strikeouts. So where do all those extra runs come from?
Presumably from the greater numbers of walks (52% higher), stolen bases (nearly four times as many as the MLB), hit batters and errors (over 3 times as many), wild pitches and passed balls.
The basic Runs Created formula doesn't consider stolen bases, nor does the version of Base Runs used above. ERP and XRR do, which presumably accounts for some of their improved accuracy over the other two formulas for the IBL. But none of the formulas incorporates errors, wild pitches or passed balls. It will be interesting to see if we can find a way to adjust them appropriately.
Posted by iblemetrician at 7:48 PM
Tuesday, October 2, 2007
I'm mystified as to why Tel Aviv wants to un-renovate the Sportek baseball field, which the IBL upgraded for use in last summer's league games. Who does it hurt if the fences are left in place for use by amateurs?
Beyond the utilitarian arguments, though, think of the sentimental value. The Sportek field is a historic landmark, the site of the first professional baseball game in the history of Tel Aviv. It was the site of the IBL's first forfeited game, when Petach Tikva's Ryan Crotin refused to leave the batter's box after his expulsion for arguing a called strike. It was the site of the dramatic playoff game when the underdog Modiin Miracle beat the favorite Tel Aviv Lightning after Tel Aviv's first baseman Stewart Brito was taken to the hospital with a broken nose and then, on the very next play, right fielder Jeff Hastings got his arm stuck in the outfield fence.
And who could forget the foul balls rolling into the Yarkon River, startling the odd passing jogger or Hare Krishna?
No, after 31 regular-season professional games and two playoff games, each of them attended by literally dozens of fans when they could make it through evening rush hour traffic, Sportek means way too much to the citizens of this country to let it fall into disrepair and ruin. No one would even think of demolishing Yankee Stadium, or Tiger Stadium! Does Sportek Field deserve any less respect?
What is this country coming to, when we can't rely on our municipal officials to make sensible, intelligent decisions for the good of the general public?
Monday, October 1, 2007
I've been thinking about run creation.
One of the oldest questions addressed by sabermetricians since the early days in the long-ago 1970s has been how to estimate the value of a team or player in terms of how many runs they have created. Pioneer baseball analyst Bill James devised a number of run estimation formulas, the simplest of which is impressive in both its elegance and its accuracy. It goes:
Runs Created = (Runners on base * Total bases advanced) / Opportunities
or, more precisely:
RC = ((H+BB) * TB) / (AB + BB)
Another way to write this formula, approximately (the first factor, (H+BB)/(AB+BB), is close to but not identical with the official on-base percentage):

RC ≈ OBP * TB
Using only statistics which are widely publicly available, this formula generally predicts the number of runs scored by a team to within 5% of its actual value. For the complete 2007 MLB season, for example, the Runs Created estimates per team range between 93.2% and 105.1% of the teams' actual runs scored, with only 4 out of 30 teams falling outside the 5% margin. The correlation coefficient between Runs and Runs Created is a striking 0.959 (perfect correlation is 1.000).
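In code, the basic formula is a one-liner. A sketch with a made-up, roughly MLB-shaped team line (not any real team's totals):

```python
def runs_created_basic(h, bb, tb, ab):
    """Bill James's basic Runs Created: ((H + BB) * TB) / (AB + BB)."""
    return (h + bb) * tb / (ab + bb)

# Made-up team totals, for illustration only:
print(runs_created_basic(h=1400, bb=500, tb=2250, ab=5500))  # 712.5
```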
Since Runs Created accurately estimates runs scored by major league teams, and the game of baseball is the same whether played in Seattle or Gezer, presumably it should work as well for the IBL as the MLB.
Right and wrong.
On the one hand, RC still correlates highly with runs scored per team. For the IBL's six teams, the correlation coefficient between the two figures is 0.966.
On the other hand, Runs Created consistently underestimates the actual number of runs scored in the IBL, by an average of 25.2%, ranging from 15.7% for Modiin to a full 36.3% for Tel Aviv:
There are two possible explanations for this:
1. IBL teams were scoring runs in ways not accounted for by the Runs Created formula, such as stolen bases, sacrifices or fielding errors.
2. Bill James's Runs Created formula does not actually capture anything essential about the way runs are created in baseball. It just coincidentally happens to work within the range of values typical of major league baseball. In leagues outside that range, RC is not correctly calibrated to estimate actual runs.
I don't have enough data yet to adequately evaluate explanation 1, but a glance at the pitching statistics indicates that there might be something to it. MLB teams gave up an average of 777 runs this season, 717 of them scored as earned runs. That means about 7.7% of runs scored were the direct or indirect result of fielding errors. (Yes, I know the error stats are unreliable, but it's the best I have.)
In the IBL, by contrast, the average team gave up 213 runs, but just 170 of them were earned runs. Fully 20% of runs were scored as unearned. That means they would only partially be reflected in the batting statistics and the Runs Created formula, since fielding errors aren't included in on-base percentage or slugging average.
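Those unearned-run fractions follow directly from the totals just quoted:

```python
# Average runs allowed per team, and the earned-run portion, as quoted above.
mlb_runs, mlb_earned = 777, 717
ibl_runs, ibl_earned = 213, 170

mlb_unearned = (mlb_runs - mlb_earned) / mlb_runs  # fraction of MLB runs unearned
ibl_unearned = (ibl_runs - ibl_earned) / ibl_runs  # fraction of IBL runs unearned
print(f"MLB: {mlb_unearned:.1%}, IBL: {ibl_unearned:.1%}")
```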
This is still far from accounting for the 25% extra runs scored over the Runs Created estimates, but it may explain close to half of it. I'd need to study the data further to know for sure.
Regarding explanation 2, it is not at all farfetched to suggest that Bill James's Runs Created formula is in large part a lucky guess. For a more detailed exposition of runs created estimates and the problems with them, see this essay by sabermetrician Tangotiger (and the sequels here and here), in which he notes:
However, the reason that Runs Created "works" is not because of its construction. It's purely an accident that it works. It just so happens that the points at which Runs Created and common sense intersect is exactly at the same points at which MLB teams play at!
To determine whether this explains the IBL results, we'd have to examine whether the IBL's teams play within the range of values for which Runs Created is accurate. I'll try to get to that some other time.
Chag Sameach, and enjoy the MLB playoffs!
Posted by iblemetrician at 4:39 PM