What Javier Taught Me

In March, I traveled to Costa Rica. The purpose of the trip was to gain a better understanding of how globalized surf culture and surf tourism impacted the country and its people. We spent a week speaking to locals and foreigners, exploring towns and villages along the Pacific coast, observing conservation efforts, meeting with community leaders, and discussing what we saw. I could write a lengthy post about everything I learned.1 But there was one event that stood out from the rest of the trip. It was a conversation with a man named Javier.

Javier grew up in a wealthy family in the capital, San José, where his life consisted of country clubs and tennis lessons. When he was 18, Javier traveled to a small coastal village of 300 people. He camped under the stars with his friends, explored the surrounding forests and mangroves, and fell in love with the place. He moved there permanently, opened a small surf shop, taught himself how to surf, and gave mangrove tours.

“What I like about this place,” Javier told me, “is the whole town is one community of friends. If someone is getting married, we all come together and celebrate. If someone dies, we all go to the funeral.” This philosophy inspired the way he ran his business. “If somebody forgets their credit card or their money, I’ll give them their surf gear anyway. I trust people.”

For many, including members of his family, Javier’s decision to forgo the material comforts of San José was puzzling. But Javier’s philosophy was that he was happier not in spite of having less, but because of it. I didn’t have to take his word for it—I could see it with my own eyes. The man radiated positive energy. He was always smiling, and his skin seemed to glow. Although he was a week shy of 43 years old, Javier looked no older than 25. This was the first time I had ever met someone who seemed truly fulfilled.

I kept thinking about our conversation long after I left Costa Rica. Javier left his old life behind to pursue what held meaning for him. What about me? What held meaning for me, and was I pursuing it? The whole grind of college—writing papers that would only be read once by my professor and then discarded, writing code that would only be run once by my TA and then deleted, taking tests that encouraged rote memorization and regurgitation, building a resumé whose sole purpose was to get me a job where “success” is determined by prestige and money—just left me unsatisfied. Where was the real learning? Where was the impact?

I knew what I wanted. I wanted to make the world a better place, and I wanted to do it in a meaningful, impactful way. I found an essay by Paul Graham where he created a list of five commands that he keeps on top of his todo list. After reflecting for a few days, I modified them to create the five commands that I keep on top of mine:

Build things that help people2

Live healthy

Be happy

Help others be happy

Do new things

What can Javier teach you?

 

1) Just a sample of the other topics I learned about: sustainability efforts and conservation, community activism, the effects of market economics on communal societies, how to surf, “soul surfers” vs competitive surfers, local vs foreign control of land/resources/money, localism and sentiments towards foreigners, income inequality, mating habits of turtles, etc.

2) I use “build” to loosely mean anything you create. You could build a movement, or an app, or a friendship. I’m building a book.

How to Eat Dinner with Barack Obama

When I was a kid, my favorite TV show was The Simpsons1. During the commercial breaks I often saw ads for contests associated with some product, e.g. “Mail in the box top from your bran flakes to be entered in our $10,000 sweepstakes!” While the ads avoided running afoul of rules against illegal lotteries by including a “no purchase necessary” clause in the fine print at the end, the hope was that you’d buy more of their product to be entered in the sweepstakes.

The Obama campaign did something similar during the 2012 election. They ran several “Dinner with Barack” sweepstakes, where entrants could win a personal dinner with President Obama. For every donation you made to the campaign, you were automatically entered into the sweepstakes. At the bottom of the donation page was the familiar “no purchase necessary” fine print, along with a link to enter the competition without donating. I entered myself a few times (who wouldn’t want a chance to speak directly to the President?), tediously reentering the information and submitting the form each time. As I did so I thought, “Why don’t I write a program to do this for me?”

Once I had the idea, I immediately noticed that there was no CAPTCHA or any other method of preventing a bot from submitting the form. This meant I could write a 20-line Python script to fill out all of the form fields and use a POST request to submit it2. The Obama campaign would obviously be suspicious if the same person or IP address submitted 20 million forms in one second, but the script could be throttled to submit less often, perhaps once every ten seconds. Since these sweepstakes were usually announced about two weeks before the deadline, I could generate:

1 submission per 10 seconds = 6 submissions per minute = 360 submissions per hour = 8,640 submissions per day = 120,960 submissions in two weeks.
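The arithmetic, and the general shape of such a script, can be sketched in a few lines of Python. Everything here is hypothetical: the real form’s URL and field names were never recorded, so the endpoint and payload keys below are invented for illustration, and the submission loop itself is left as a comment.

```python
import time  # only needed if you actually run the (commented-out) loop

# Hypothetical endpoint -- the real donation-page URL is not known.
FORM_URL = "https://example.com/dinner-sweepstakes/entry"

def build_entry(name: str, email: str) -> dict:
    """Assemble one form payload; the field names are guesses."""
    return {"full_name": name, "email": email}

def max_submissions(days: int, interval_seconds: int = 10) -> int:
    """How many throttled submissions fit in the given window."""
    return days * 24 * 60 * 60 // interval_seconds

# The loop itself would look something like this (needs the `requests` library):
#   import requests
#   while True:
#       requests.post(FORM_URL, data=build_entry("Gautam Narula", "g@example.com"))
#       time.sleep(10)
```

With one submission every ten seconds, `max_submissions(14)` gives the 120,960 figure above.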

There were several of these competitions, so it’s possible that among them all I could have entered a million times (and perhaps even more, depending on how much script throttling is necessary to avoid detection). And if I could get several computers with different IP addresses submitting entries at the same time, my total entries could be on the order of tens of millions3.

Would this have worked? Although it would have been a lot of fun to try this out, I never implemented the script because of ethical concerns4, and I kept it to myself so others wouldn’t try to use it as well. When I read through the official rules and all the fine print, I found no prohibition against using an automated system. Given that the campaign staff had far more pressing concerns, they may have overlooked this simple exploit. But that is still surprising considering the Obama campaign’s reputation for technical prowess. Perhaps I should tell the President about this over dinner.

 

1 The Simpsons isn’t designed with kids in mind, but the show’s clever wit and gentle satire really resonated with me. As a longtime fan it saddens me to see the show steadily go downhill. The best season was undoubtedly season five, which came out in 1993 (the year I was born!).

2 Writing the script would have taken no more than an hour.

3 To further avoid detection, each instance of the script could enter slightly different information, like “Gautam Narula”, “Gautam R Narula”, “G. Narula”, and multiple (valid) email and mailing addresses.

4 It probably wasn’t illegal, but this exploit definitely exists in the gray area where hackers (in the programming sense of the word, not the steal-your-identity Hollywood usage) spend much of their time. I felt it was against the spirit of the competition to use a bot to enter myself hundreds of thousands of times. Then again, maybe that’s why I never ate dinner with President Obama.

 

Building a Rating System

Most games today have a poor ranking system. Take tennis. The ATP ranking system is based on “points”, which are awarded by participating in select tournaments in the previous 52 weeks. The system is needlessly complicated and somewhat arbitrary, and does not directly take strength of opposition into account. The upper echelons of Halo and Call of Duty online rankings are, according to a friend, often populated with mediocre players who just play a lot of games, a result of a system that rewards sheer quantity of play over quality of results1. I played high school quizbowl, and the system for ranking the teams was subjective and fairly arbitrary. The top 20 or so teams were fairly accurate because people knew who the best teams were, but beyond that it was guesswork based on unreliable metrics2.

Chess has a different rating system, one that nearly every game and sport should use. The underlying premise of the Elo rating system, named after its Hungarian inventor, is that the purpose of a rating is to be able to predict the outcome of future games. Philosophically, it was based on a big data approach before big data existed–the only thing that should determine the rating formula is the corpus of games played. With enough prior games, Elo contends, you should be able to calculate the probability of any player beating any other player. The Elo rating system is better than many other ranking systems for several reasons:

  1. It rewards the quality and consistency of results over sheer frequency of playing. Three good results will gain more points than ten mediocre results.
  2. It provides a way to accurately predict the probability of one player beating another. In a two player Elo system, a player rated 100 points higher than his or her opponent has a 64% probability of winning3.
  3. The ratings directly account for the opposition strength. Beating strong opponents gains more points. Losing to weak opponents loses more points.

Developing the Algorithm

My friends and I started getting into the board game Settlers of Catan towards the end of my senior year of high school. We played a few games every week and would get pretty competitive about it. We’d argue over who was the best but had no metric to determine that other than, “well, he seems to win all the time.” After our first semester of college, we all came back home for the break and resumed playing. Inspired by chess ratings, I sought to create a rating system to put an end to the debate once and for all. I spent a day thinking about the various algorithms to use and a night coding an app that I later called the “Game Rating Calculator.” The rest of this post explains the process I went through in creating a rating algorithm, and the results after nearly a year of using it.

When I was developing a rating algorithm for the Game Rating Calculator, I was inspired to create a system modeled after the Elo system. I couldn’t port the Elo formula exactly, since it was designed for only two player games. I looked at a few approaches before settling (pun intended) on the final formula.

There are many variations of the Elo formula4. For the sake of clarity, all references in this post will be to the following formula:

Rnew = Rpre + K(S - E)

Where Rnew is the new rating, Rpre is the pregame rating, K is the “K factor,” an arbitrary multiplier (a higher K factor means higher rating volatility), S is the total score in a rated event, and E is the total expected score in a rated event, which is calculated by:

E = 1 / (1 + 10^((Ropp - Rpre) / 400))

where Ropp  is the opponent’s rating.
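For concreteness, the standard two-player Elo update can be sketched in Python, using the K factor of 16 assumed in this post’s calculations:

```python
def elo_expected(rating: float, opp_rating: float) -> float:
    """Expected score against a single opponent under the standard Elo curve."""
    return 1 / (1 + 10 ** ((opp_rating - rating) / 400))

def elo_update(rating: float, opp_rating: float, score: float, k: float = 16) -> float:
    """Rnew = Rpre + K(S - E) for a single two-player game."""
    return rating + k * (score - elo_expected(rating, opp_rating))

# A player rated 100 points above an opponent is expected to score about 0.64,
# matching the 64% figure in footnote 3.
```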

Assumptions:

I took my base scenario to be a four player game of Settlers of Catan where all players were of equal strength. Intuitively, each player’s probability of winning the game is 25%. Assume that A came in first and, when it is possible to have a second, third, and fourth place, that B came in second, C came in third, and D came in fourth. To determine a player’s opposition strength, I take the average rating of all of the opponents. Unless otherwise noted, all calculations using an Elo formula are done with a K factor of 16.

Approach #1 – The Pairwise Approach

The pairwise approach tries to get around Elo’s inability to work for more than two players by treating multiplayer games with one winner as a series of pairwise matches. In a four player game with players A, B, C, and D, if A won the game, you would rate the game as the result of three individual matches: A vs B, A vs C, and A vs D. But that leaves us with a problem: A has “played” three games, while B, C, and D have played one. The official online Settlers of Catan website and ranking system attempts to rectify this by taking into account relative positions in the game. That is, the second place player (determined by the number of points at the end of the game) will have “played” three games, losing against the first place player and winning against the third and fourth place players, and so on.

There are several problems with this approach. In our hypothetical case of four equally rated players playing a game, each player should achieve each possible position (first, second, third, and fourth) 25% of the time. In this case, the winning player’s rating was calculated as if he won three games in a row. Since each of his opponents is equally rated, the probability of him winning any individual game is 50% and the probability of winning three games in a row is .5 × .5 × .5 = .125 = 12.5%. This means the formula acts as if the winner has accomplished a task that is twice as difficult as it really was, and therefore rewards too many points. Similarly, the last place player will lose too many points, because even though his odds of coming in last were 25%, his rating will be calculated as if he lost three games in a row, which only has a 12.5% chance of occurring.

We can use the binomial theorem to calculate the implied odds for the second and third place players. The probability of winning exactly two of three games against equally rated opponents (second place) is C(3,2) × (.5)^2 × (.5)^1 = 3 × .125 = .375 = 37.5%. Similarly, the probability of winning exactly one of three games (third place) is C(3,1) × (.5)^1 × (.5)^2 = 3 × .125 = .375 = 37.5%5.

We can summarize the results in the following table:

Position | Actual Probability of Achieving Position | Implied Probability Through Pairwise Ratings | Result
First | 0.25 | 0.125 | Overrated
Second | 0.25 | 0.375 | Underrated
Third | 0.25 | 0.375 | Overrated
Fourth | 0.25 | 0.125 | Underrated
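The implied probabilities in this table can be checked with a few lines of Python using the binomial formula:

```python
from math import comb

def implied_prob(wins: int, games: int = 3, p: float = 0.5) -> float:
    """Binomial probability of exactly `wins` wins in `games` pairwise
    matches against an equally rated opponent (p = 0.5 per game)."""
    return comb(games, wins) * p**wins * (1 - p)**(games - wins)

# implied_prob(3) = 0.125, implied_prob(2) = implied_prob(1) = 0.375,
# implied_prob(0) = 0.125 -- the implied column of the table above.
```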

We can determine the results using a standard Elo calculator with a K factor of 32. Let all four players be rated 1000. Their new ratings would be as follows:

First place 1000 → 1048

Second place 1000 → 1016

Third place 1000 → 984

Fourth Place 1000 → 952
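These four adjustments follow from applying Rnew = Rpre + K(S - E) once over the whole event with K = 32: each player’s expected score over three pairwise games against equal opposition is E = 1.5, while the actual scores are 3, 2, 1, and 0. A quick sketch:

```python
def event_update(rating: float, score: float, expected: float, k: float = 32) -> float:
    """Rnew = Rpre + K(S - E), applied once over the whole event."""
    return rating + k * (score - expected)

# Four equal 1000-rated players, scored as three pairwise games each:
# first place goes 3/3, second 2/3, third 1/3, fourth 0/3; E = 1.5 for all.
new_ratings = [event_update(1000, s, 1.5) for s in (3, 2, 1, 0)]
# new_ratings: 1048, 1016, 984, 952
```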

One could argue that even though the implied expected scores are off, since the players are all equal in strength, things should “balance out”. That is, in this example, the last place player is expected to come in first as many times as he comes in last, so things should even out. And if he doesn’t then he really deserves the lower rating.

The problem is that this really messes with the predictability of Elo ratings. In that one example, there is now a 96 point difference between the first place player and the last place player, which is so big that, after just one game, A’s predicted probability of beating D head-to-head has jumped from 50% to 63.5%. Even with a lowered K factor dampening the rating change, the probability would still be overly optimistic for A. And since Elo uses the predicted probability (expected score) for calculating rating changes, this can mess with subsequent rating changes. Even if all the kinks could be worked out, it seems sloppy to use incorrect probabilities.

From a practical standpoint, this system also has two major problems.

  1. In a game where there is only a winner and multiple losers, this system wouldn’t work. In Settlers of Catan, the player with the second most points at the end of the game is not necessarily the second most likely to win. Games where a player in third or even fourth “leapfrogs” to first and wins the game are not uncommon.
  2. Having ratings dependent on positioning is ripe for manipulation. For instance, if you were in dead last in a game of Settlers, you could offer to throw the game in favor of another player in exchange for help in moving up from last place to third. This kind of manipulation is incentivized under this system.

Approach #2 – The Proportional Rating Approach

Elo operates on differences between actual performance and expected performance. As I studied potential solutions to a multiplayer Elo algorithm, I realized that if I could develop a way to determine expected performance in a multiplayer game, I could just plug it into the two player Elo formula. One way of doing this would be to give each player an expected performance equal to the proportion of their rating to the sum of the total ratings. If there were four players with ratings R1, R2, R3, R4, each player’s individual expected performance would be their rating divided by (R1+R2+R3+R4). In the case of our hypothetical example, each player’s expected score would be 1000/4000=.25, which is what our intuition expects.
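A minimal sketch of the proportional expected score described above:

```python
def proportional_expected(rating: float, opp_ratings: list) -> float:
    """Expected score = this player's share of the total rating in the game."""
    return rating / (rating + sum(opp_ratings))

# Four equal 1000-rated players: each expects 1000/4000 = 0.25.
# A 2000-rated player against three 1000s expects 2000/5000 = 0.40.
```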

The one issue with this approach is that it implies fairly slow improvement. Winning five games in a row is a pretty big deal (after all, in four person games against equal opponents the odds of that happening are .25^5 ≈ .1%), but the rating gain would only be 65 points, which means that in a sixth game against 1000 rated opponents the expected probability of winning would only be 1065/4065 = 26.2%, when it clearly should be higher.

As for evaluating its accuracy, there was really no way to test if the expected score was reasonable other than to eyeball it for completely inaccurate values. In a four player game with players rated 2000, 1000, 1000, and 1000, the first player has a 40% chance of winning, which seems reasonable for a player who is twice as good as each of his individual opponents. The only problem is that it would have taken so many wins to get to 2000 that the player is almost certainly underrated to begin with.

Approach #3 – The Modified Expected Score Approach

There was one final approach to examine, and this was the simplest of them all. Take our base scenario of four players, each rated 1000. The expected score of each player is .25, while the expected score in a one game, pairwise comparison is .5. In a five player game, the expected score is .20, while the pairwise comparison score is .5. In a six player game, the expected score is .17, while the pairwise comparison score is .5. The expected score for a game of N equally rated players can be generalized as:

2Ep/N

where Ep is the expected score in a pairwise comparison. The key question was, could I expand this formula for the expected score beyond the case of all players being equally rated? I added this multiplier to the Elo formula and tried it out with a few hypothetical games, and the rating adjustments seemed to make sense. But the only true way to establish this was to actually use the system in production. After all, Elo was revised over 40 years of real world usage (though others have found better algorithms). There were two things I really liked about this formula:

  1. It’s point neutral among established players (not counting rounding errors)6.
  2. It gives a reasonable point adjustment for different size games.
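A sketch of this final approach as described above, assuming the 2Ep/N multiplier is applied to the pairwise expectation against the average opponent rating (the function names here are mine, not the Game Rating Calculator’s):

```python
def expected_score(rating: float, opp_ratings: list) -> float:
    """Pairwise Elo expectation vs. the average opponent, scaled by 2/N."""
    avg_opp = sum(opp_ratings) / len(opp_ratings)
    e_pair = 1 / (1 + 10 ** ((avg_opp - rating) / 400))
    n_players = len(opp_ratings) + 1
    return 2 * e_pair / n_players

def update(rating: float, opp_ratings: list, score: float, k: float = 16) -> float:
    """Rnew = Rpre + K(S - E) with the modified expected score."""
    return rating + k * (score - expected_score(rating, opp_ratings))

# Four equal 1000-rated players: each expects 0.25, the winner gains 12 points,
# each loser drops 4, and the total rating pool is unchanged (point neutral).
```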

This seemed to be the most promising approach and the algorithm I ended up using. The only issue that might arise is how the ratings would operate in the case of a player rated far above the rest. In a four player game, the theoretical maximum expected score is .5, which means that even if a player were rated one million points higher than each of his opponents, he’d still gain a sizable 16 points from winning. This issue only came up in such unrealistic scenarios, so I wasn’t too worried about it.

Provisional Ratings

The United States Chess Federation has a concept of provisional ratings. For the first 25 rated games, players’ ratings are more volatile7. The idea behind this is that it will more quickly get players to their “true rating”. It also takes into account that moving from casual chess to tournament chess is a disruptive process that has a learning curve, and that a player’s strength will be more volatile as he moves up that curve.

I didn’t include such a system in the rating calculator for a few reasons. First, it is easy to play such a large number of chess games, which makes reaching 25 fairly simple. In a game like Settlers of Catan, where you need to gather at least three (usually more) people for 90+ minutes, this isn’t practical. In the rating pool I set up, only five people had played 25 or more games after 11 months.

There is also a real risk of rating deflation, depending on the number of provisional games. For example, let’s say a provisional rating is twice as volatile as a non-provisional rating; that is, you gain or lose twice as many points while provisionally rated. If there were just one provisional game and an average game of Settlers had four players, 75% of the players would have their ratings deflated, since 75% would lose their first game. That 75% would lose twice as many points on that first game as an established player would, and would then have to win those points back at the normal rate.

The idea is to tailor the number of provisional games to the point where players are equally likely to get their wins and losses in that set of provisional games as they are to win and lose in general.

The provisional period is five games in my rating pool. At the time, the average game seemed to have around five players. We’ve played many more games since then, and the average has dropped to four players per game. In the first four games, an average player is expected to win one game, but in the fifth game, 75% of the players will lose. In this case, 75% of the incoming players have a deflated rating. Unfortunately, due to a desire for consistency in the rating system, I can’t dynamically adjust the number of provisional games each time. I may retroactively recalculate the ratings based on an average game size of four, since that has held stable for a while.

Results

It was time to put this rating algorithm into production, and see how it held up. As of this writing, 80 rated games have been played, and a total of 28 players have played at least one rated game. Below are all the results for the players who’ve played enough games to not be provisionally rated (6+).

Player | Rating | Games Played | Average Game Size | Winning Percentage | Expected Winning Percentage* | Net Difference**
K.S. | 1073 | 44 | 3.82 | 40.9 | 26.2 | 14.7
W.A. | 1052 | 13 | 4.85 | 38.5 | 20.6 | 17.8
G.N. | 1052 | 38 | 4.37 | 31.6 | 22.9 | 8.7
A.S. | 1028 | 60 | 3.72 | 25 | 26.9 | -1.9
K.S. | 1012 | 41 | 3.88 | 31.7 | 25.8 | 5.9
I.M. | 984 | 16 | 4.5 | 25 | 22.2 | 2.8
M.R. | 977 | 7 | 5 | 14.3 | 20 | -5.7
A.S. | 965 | 10 | 4.9 | 10 | 20.4 | -10.4
G.M. | 937 | 9 | 4.44 | 11.1 | 22.5 | -11.4
C.W. | 934 | 6 | 4.5 | 0 | 22.2 | -22.2
S.S. | 889 | 29 | 3.66 | 13.8 | 27.4 | -13.6
B.S. | 859 | 12 | 4.25 | 0 | 23.5 | -23.5

Average: 980.2

Median: 980.5

Average game size of all games played: 3.93

* The Expected Winning Percentage is based on the average game size, assuming all other opponents were equally rated

** The Net Difference may not exactly equal Winning Percentage minus Expected Winning Percentage due to rounding

At first, I arbitrarily started off all players at 1000. Once there was a decent sized pool, I started off all new players at the median rating of non-provisional players. The median rating has dropped over time, probably because of the deflationary effects of the provisional ratings I mentioned earlier.

Overall, the results look pretty good. The ratings intuitively correspond with how I feel most people should be rated. Importantly, the ratings fit one of the main criteria I wanted in a rating algorithm: it rewards the quality and consistency of results, and not just the sheer quantity of games. For instance, W.A. and I are rated exactly the same. He has a much better net winning percentage than I do, but that is compensated by the fact that I’ve maintained a decent percentage for far more games. His strong results are balanced by my consistency. There is one obvious outlier here, and that is A.S. A.S. used to be much lower rated (below 900) but has recently surged with a lot of strong results. One possible explanation for his high rating and low winning percentage is that he played a lot of games with high rated players. I haven’t calculated each player’s average opponent’s rating, so I can’t say for sure.

I’ll eventually put this all in a SQL database to automate some of the data gathering. But the best way to improve the rating system would be to run data analysis algorithms on the results. How accurately do the ratings predict results? Unfortunately there is still not enough data. Heuristically, I’d think there would have to be a rating pool on the order of at least 25 people who have played at least 25 games to accurately assess the system.

But for now, this algorithm will do. If you’re interested in learning more about the Elo rating system in order to develop your own, check out the book by Elo himself, The Rating of Chess Players: Past and Present.

Tl;dr: Download the desktop version and Android version of the app.

 

1 But then again, that’s what the game makers want. These days games usually have some sort of in-game store or revenue stream, so more play usually means more money.

2 There have been some attempts to institute a chess-style Elo rating for quizbowl, but the ratings aren’t very accurate since data is sporadic.

3 Assuming you are using the Elo formula mentioned in this post.

4 If you’re interested in learning more about the Elo formula, here is a good treatise about it by Mark Glickman, the chief statistician in charge of the United States Chess Federation implementation of the Elo formula.

5 You could have also intuitively determined this result by noting that the probability distribution over the four possible outcomes is symmetrical: since winning three in a row and losing three in a row are each .125, the other two possibilities must sum to .75, hence .375 each.

6 It’s interesting to note that the Elo systems used by the United States Chess Federation and FIDE (the world chess federation) are not point neutral, because they have a K factor that declines as rating goes up. This was apparently used as a way to stop rating inflation at the top of the rating pool. Not all chess organizations agree with this—the Internet Chess Club, the premier online chess server, does not use variable K factors outside of provisional games.

7 The volatility decreases within the set of 25 games.



 

Using Data to Improve Your Chess

When I was 16, I went on a hiatus from competitive chess. Although I would occasionally play in tournaments for fun and out of habit, I stopped actively training and, sure enough, plunged in the rankings. But by early 2012, I had grown frustrated with losing and was ready to come back. I spent all of January studying and playing training games, and I registered for a four round tournament in early February.

As is typical in Swiss paired tournaments, the first round was heavily mismatched, and I quickly dispatched a lower rated opponent. In the second round, I played black against a strong opponent. My opponent played an offbeat opening that gave me a nice, nearly winning advantage, which I promptly threw away. I had to settle for a hard fought draw.

The third round was a quick win with white against another fairly strong opponent. Although I wasn’t familiar with the opening, I was in control the entire game, finishing with a flashy sacrificial win.

I was 2.5/3 moving into the final round, and winning this game would assure me first place in the tournament. I played as black, and the game started in a familiar opening. But by move ten I was feeling uncomfortable with my position. I struggled under the mounting psychological pressure, and collapsed on move 16, resigning a few moves later.

As I drove home from the chess center with no prize money or rating gains in hand, I wondered about the results. Was it just a coincidence that I had struggled so much with black and dominated with white? Or was there something deeper going on? When I reached home, I looked through the results of all of the tournaments I had played in the last year. Was there a connection between my performance with white and black?

Here were my results:

White
Score: 7.5/13
Rating Performance: 2009

Black
Score: 3.5/10
Rating Performance: 1661

The weighted average of my performance rating was 1858, identical to my rating after the tournament. Converting the rating difference to statistical predictions, my performance suggested that I was 7.3 times stronger as white than I was as black. What was causing this enormous disparity? The advantage white gets from having the first move is so slight that it shouldn’t have any impact below professional level. I analyzed my games, and noticed a pattern with the ones I played as black. In those games, I would often struggle in the opening, and make uncharacteristic positional mistakes or blunders. I often got far behind on time, overthinking positions. I didn’t have confidence in my play, and usually thought my position was much worse than it objectively was. Eventually the psychological pressure would reach a breaking point and I would collapse.
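The “times stronger” conversion comes from the Elo logistic curve: a rating gap of d points implies winning odds of 10^(d/400) for the stronger side. A quick sketch (the exact figure depends on rounding):

```python
def implied_odds(perf_a: float, perf_b: float) -> float:
    """Implied odds that the stronger performance beats the weaker,
    per the Elo logistic curve: odds = 10^((perf_a - perf_b)/400)."""
    return 10 ** ((perf_a - perf_b) / 400)

# For the 2009 vs. 1661 performance ratings above, the gap of 348 points
# implies odds of roughly 7.4 to 1.
```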

This all had to do with the psychological effect of openings. With white, having the first move allowed me to steer the position into one that I was comfortable with, even if the opening was unknown. With black that comfort often isn’t there, and in many of my games the psychological pressure of being in an uncomfortable (though not necessarily bad) position and taken out of book1 caused me to make some terrible positional mistakes and outright tactical blunders. In the best case, I would hold onto a decent position but get far behind in time.

I spent the next week focusing on two things: improving my opening knowledge as black, and keeping my cool in psychologically uncomfortable positions. At the end of the week, I played in a large, multi-day tournament. There were some close calls, but in the end I won the tournament with a score of 4.5/5, 1.5 points ahead of the next player and with the widest margin of any section winner in the entire tournament. This was the first time in two years—since the time I was at my peak—that I had won a major tournament. My performance was as follows:

Overall

Score: 4.5/5

Rating Performance: 2075

White

Score: 2/2

Rating Performance: 2270

Black

Score: 2.5/3

Rating Performance: 1920

There’s a viable argument that I just had a good tournament, since I outperformed with white as well as black, and the difference in performance rating was still about 350 points. But then, maybe it was a positive feedback loop from doing well with black. After all, there is a large psychological carryover from previous games in chess. I could have played much better as white by not being as psychologically or physically drained from games I played with black. It’s hard to tell from one tournament.

Nonetheless, I noticed a definite shift in the way I was playing and how I was psychologically reacting to unfamiliar positions. And regardless of the result after merely one week of training, I was able to pinpoint a weakness and target my training regimen appropriately. I seemed poised to make my comeback to chess. Unfortunately, the Atlanta Chess Center, where the vast majority of tournaments in Georgia are held, went bankrupt just a few weeks later and so my return was put on indefinite hold.

There is a lot of potential for this sort of approach. When chess players decide what to study, it’s typically off of gut instinct. It’s easy enough to see which specific areas of your chess skills are weak (there are books for that), but data-driven analysis can give insights into the less obvious areas of chess performance. In addition to performance with each color, you could analyze performance at different time controls, different levels of tiredness (rounds early in a tournament versus later rounds), and even against individual opponents2.

It would be interesting to develop machine learning and data analysis algorithms to look at these areas. The main problem here is that data is not easily available. Although the United States Chess Federation (USCF) keeps a database of wins, losses, and rating changes, they have no API for access, and they only recently started (sporadically) tracking which games were played as white and black. Popular chess software, like Fritz and Chessbase, can automate this somewhat, because they allow you to filter games by ratings, openings, and dates. This would help narrow down the games to analyze, but most of the useful data analysis still has to be done by hand. Writing data mining software for chess would be a cool project, but it wouldn’t automate everything. Some components of chess performance (tiredness, psychology) are qualitative. Data mining can find patterns, but it’s up to you to figure out what those patterns mean. Using this software would also require the user to keep many detailed records that the USCF doesn’t: time control, color, notes on psychological and physical conditions, etc.

But in the end, the effort would be well worth it. Facebook and Google are successful because they mine data to offer targeted advertisements to their users. Amazon and Netflix use machine learning to predict what products and shows you’ll like. And using data, you can target your chess training for better results.

 

1 “Out of book” means out of the previously established opening theory.

2 Analyzing results against individuals can actually be a very useful exercise. Many times, a statistically poor score against an individual (say, scoring 2/7 against an equally rated opponent) can reveal weaknesses against particular openings, playing styles, or psychological conditions.
