The Ratings Committee (RC) tackled a fairly short list of tasks this year. The main tasks involved addressing a question concerning ratings near the absolute floor of 100, and ensuring greater correspondence between Quick Chess (QC) ratings and regular ratings. We also carried out a detailed examination of rating changes in the rating pool.

Correspondence with Mike Nolan in November indicated that many more players were at the absolute rating floor of 100 this year than in previous years. The RC was asked to consider possible responses to this issue. After some discussion, the RC proposed that if an unrated player earns a rating less than 100 (before being floored at 100), the player should remain unrated, as though he/she had not yet played any games. This proposal borrows from the FIDE model of requiring a minimum performance to receive a first rating. In response, the Executive Board (EB) decided it was preferable not to take any action, arguing that an increased frequency of 100 ratings may simply indicate an increase in very weak players. The EB also preferred to continue the practice of issuing ratings once a player has played at least four games.

An ongoing task from the last couple of years has been addressing concerns that QC and regular ratings are out of alignment for players who have both. The solution we proposed was to replace the QC system with one that rates all events, while keeping the regular system, which rates events with time controls of G/30 or slower. The revised QC system would likely be called something other than "Quick Chess"; one alternative discussed was the "universal" system, but other names are possible. In implementing such a system, players would be considered unrated under the new system going back to the start of 2004, and the "universal" system would then be applied prospectively to all events from that point. There are several positives to this proposal.
First, one's "universal" rating is equal to the regular rating until a player demonstrates evidence otherwise through a substantial number of fast time-control events. Second, results from regular events will affect "universal" ratings, but not vice versa (for events with time controls quicker than G/30). Finally, any differences that persist between "universal" and regular ratings are arguably a demonstration that the player's quick and regular strengths differ. The EB reacted positively to the proposal. Mike Nolan is currently testing different strategies for initializing the "universal" ratings, in addition to the method mentioned above.

The ratings committee was also asked to address a few smaller concerns. The USCF office alerted us that the USCF was considering becoming formally involved in bughouse and Fischer random chess, and the RC was asked whether these variants could be rated by the USCF. The RC chair responded that separate rating systems would likely need to be constructed, but that doing so would not be problematic. When the USCF is closer to overseeing bughouse and Fischer random chess tournaments, the RC will become more involved in rating discussion and development.

A second issue that arose was whether the RC should be involved in updating the USCF correspondence rating system. This was motivated by some lack of clarity in the online explanation, especially in the description of how the system handles provisionally rated players. The RC chair is currently in discussions with Mike Nolan about making small changes to the correspondence rating formulas, incorporating some ideas from the current over-the-board system to improve rating behavior.

Every year the RC performs a set of diagnostic analyses to monitor trends in the rating pool. As is well known, overall rating levels deflated from the mid-1990s through 2000, when rating floors were decreased by 100 points without a counteracting inflationary mechanism.
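The alignment rules of the proposed "universal" system described earlier can be sketched in a few lines of code. This is a hypothetical illustration only: the `Player` class, the G/30 cutoff check, and the simple Elo-style update with a fixed K are assumptions made for the example, not the actual USCF formulas.

```python
def update(rating, opponent, score, k=32):
    """Elo-style update (illustrative K of 32; not the USCF formula)."""
    expected = 1 / (1 + 10 ** ((opponent - rating) / 400))
    return rating + k * (score - expected)

class Player:
    def __init__(self, regular_rating):
        # A player's "universal" rating starts equal to the regular rating.
        self.regular = regular_rating
        self.universal = regular_rating

    def record_game(self, opponent_rating, score, time_control_minutes):
        if time_control_minutes >= 30:
            # Regular event (G/30 or slower): affects both systems.
            self.regular = update(self.regular, opponent_rating, score)
            self.universal = update(self.universal, opponent_rating, score)
        else:
            # Faster than G/30: affects the "universal" rating only, so
            # the two ratings diverge only through quick play.
            self.universal = update(self.universal, opponent_rating, score)
```

Under these rules, a player who never plays fast time controls keeps identical regular and "universal" ratings, which is the intended behavior of the proposal.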
With the new rating system in place, ratings have begun to reinflate. As a rough goal, the RC has been intending to restore rating levels to where they were in 1997. Typically, the RC has focused attention on players with established ratings who have been active in the past three years and who are aged 35-45. Our analysis this year was more expansive.

To summarize, we examined the difference in average ratings (for players active over the current and previous two years) between 1997 and 2000, and between 1997 and 2006. The results are shown in the accompanying figure. The dashed line, which displays the average rating difference between 1997 and 2000 as a function of age (the curve is a "locally weighted scatterplot smoother", or "loess"), indicates that ratings in 1997 were higher than ratings in 2000 by about 60-70 rating points for players under 30 years old, and by about 40 rating points for players older than 35. The dotted line, which displays the average rating difference between 1997 and 2006, shows a much more optimistic picture: on average, ratings as a function of age differ by at most 20 points between 2006 and 1997. This result indicates some degree of success of the new rating system, with its bonus and feedback mechanisms as well as a higher K factor for lower-rated players. The RC will continue to monitor the rating pool for anomalous changes over time.
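As an illustration of the smoothing behind the figure, the sketch below computes a locally weighted average of 1997-minus-2000 rating differences as a function of age, using the tricube weight function that loess also uses. It is a simplified degree-zero version (true loess fits local linear regressions rather than weighted means), and the data points are invented for the example, not taken from the actual analysis.

```python
def tricube(u):
    """Tricube kernel: full weight at distance 0, zero weight beyond 1."""
    u = abs(u)
    return (1 - u ** 3) ** 3 if u < 1 else 0.0

def smooth(ages, diffs, age0, bandwidth=10.0):
    """Kernel-weighted mean of rating differences for players near age0."""
    weights = [tricube((a - age0) / bandwidth) for a in ages]
    total = sum(weights)
    if total == 0:
        return None  # no players within the bandwidth window
    return sum(w * d for w, d in zip(weights, diffs)) / total

# Hypothetical per-player (age, 1997-minus-2000 rating difference) pairs,
# shaped like the trend in the figure: larger deflation at younger ages.
ages = [12, 18, 25, 33, 40, 48, 55]
diffs = [70, 65, 60, 45, 40, 38, 42]

# Evaluate the smoothed curve over a grid of ages, as the figure does.
curve = [(a, smooth(ages, diffs, a)) for a in range(10, 61, 10)]
```

The bandwidth controls how local the averaging is: a small bandwidth tracks the scatter closely, while a large one flattens the curve toward the overall mean.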