Dispatches from #SSAC16

These posts are all about the 2016 Sloan Sports Analytics Conference (or SSAC16 for short). You can check out all of my posts related to SSAC16 by clicking this link. If you're attending, don't be shy about saying hi!

Can you Really Predict Athletic Injury and Performance?

Dr. Phil Wagner of Sparta Science has developed a jump-based assessment and protocol for preventing injuries across all sports. Jumping is an interesting assessment since it involves many different body mechanics (balance, force of the jump, adjustment on landing, amount of sway, etc.). Apparently, this is not new science -- jumping assessments for injury prediction have been around for years and are well-studied.

Wagner packages his product -- and most of his talk seems like a product pitch, really -- as a way to assess, then modify your training regimen to increase or decrease particular movements that correlate with injury, and finally re-assess and hope you are on track for prevention. Wagner does a very good job of not claiming this is causal -- I am not sure whether that came across to the audience -- and maintains that this is a method to save teams and programs money while also extending the performance lifetime of a player.

Analytics: he uses what seems to be a custom metric called a "T-score" -- which bears little resemblance to a t-value and seems more similar to a z-score. He never goes into detail about how it's calculated, mainly because that's what he's selling. And a theme throughout the conference is more about how analytics are being used than what analytics are being used.
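Since Wagner never defines his "T-score," here's a minimal sketch for my own reference (my numbers and names, nothing from Sparta) of the two textbook quantities the name evokes -- a z-score against known population parameters versus a one-sample t-statistic that uses the sample's own spread:

```python
import math

def z_score(x, population_mean, population_sd):
    """Standard score: how many population SDs an observation sits from the mean."""
    return (x - population_mean) / population_sd

def one_sample_t(sample, hypothesized_mean):
    """One-sample t statistic: uses the sample's own SD, so it varies with n."""
    n = len(sample)
    mean = sum(sample) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in sample) / (n - 1))
    return (mean - hypothesized_mean) / (sd / math.sqrt(n))

# Made-up example: a jump force of 2400 N against a reference population
print(z_score(2400, population_mean=2200, population_sd=150))            # ~1.33
print(one_sample_t([2350, 2420, 2380, 2450], hypothesized_mean=2200))    # ~9.1
```

Whatever Sparta's "T-score" actually is, the point stands: it's proprietary, and the talk was about how it's used, not how it's built.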

Putting the Data Science into Sports Science

My initial assumption was that this was going to be about data science applications in sports science. While a lot of it touches on that, much of the talk was about defining sports science and what open areas of research have not been explored. Ray Hensberger from Booz Allen Hamilton talks about the four key measurables for an athlete: physical attributes, motions, biochemical makeup, and mental/cognitive makeup. He goes on to say these measurables help inform health optimization (through resting, eating, training), injury prevention, and game strategy. Hensberger never touches on game strategy, but does spend a chunk of time discussing the cost of injuries (NBA teams apparently lost $350M due to player injuries) and predictors of injuries (specifically fatigue).

Hensberger points towards the future of assessment being more within the mental makeup realm, specifically citing EEG as the next wave of assessment. I asked Hensberger after the talk how far off Booz Allen Hamilton is from collecting EEG data and he said that is starting within this year for them.

As a cognitive neuroscientist who has talked about the lack of industry opportunities to keep being a cognitive neuroscientist, I find this extremely interesting. Hensberger did say he "wasn't an expert... yet" in the details of implementing cognitive/neural research on athletes, but he did make it sound like this is a sure thing for Booz Allen Hamilton. It will be interesting to see who they bring on to start this side of the business.

Fan Engagement, Player Performance, Prevent Injuries: VERT Wearable for Athletes

As you may assume, I was interested in knowing more about wearables throughout this conference, since wearables are collecting data and I'm interested in data analytics. This particular talk by Martin Matak was less about what their wearable technically does and more about what their wearable is doing -- particularly for fan engagement. VERT tracks all sorts of motion-related data and is particularly useful for tracking the height of a jump. Matak showed a series of videos (basically commercials) demonstrating how VERT is used in stadiums and on broadcasts. Sure enough, it does provide near-instant feedback and can be broadcast in near-real time, providing more data to fans and allowing them to better contextualize events occurring in-game.

Matak touches on injury prevention and the use of VERT in training -- the device is pretty small (the size of a USB flash drive) -- and ultimately how to keep players playing, and how to optimize their performance.

This talk was less about how they measured things and more about what they were measuring. There was no specific call for action or future of VERT. In general, I felt like I was at an in-person commercial. Which was fine (I was expecting that) but I wasn't expecting to see actual commercials while a guy stood on stage.

Road to the Championship: Playoff Analytics

This panel had some high-ranking representation from different US sports: Oliver Luck (NCAA), Chris Marinak (MLB), Eric Nyquist (NASCAR), Todd Durbin (MLS). Also on the panel was Neil Paine from 538, representing both the media side and a professional basketball angle. The major theme of the panel: what makes a champion?

Each panelist took turns discussing what they believed made a champion, and many stayed fairly political -- saying things like "a champion is the team that hoists the Cup". An interesting question from Paine was, should the best team on paper be the champion? -- which was countered with, then why even play the game if we can crown champions with simulations? All agreed that you play the game to actually determine whether the on-paper stats translate to the best team performance.

They were asked how their regular seasons set teams up for their playoffs, and whether there were too many games or not enough -- or even too many teams. Durbin seemed most prepared to answer these sorts of questions, probably because the MLS is still growing and is adjusting its scheduling and playoff structure to match the league's needs. He says that emphasis on conference play and winning the conference is absolutely important to the playoff process -- thus, their scheduling reflects this emphasis. For NCAA football and MLB, there is no real parity within the season -- teams play within their conference or division but cannot play every team in the league in the exact same way. Luck goes on to say NCAA football cannot be changed much, since you are asking not just a league but a school -- multiple schools -- to adjust to a new schedule, which may extend student athletes' travel time and disrupt their academics in the process -- not to mention a potential increase in costs to the universities.

When Marinak was asked whether there are too many games, Paine had actually already countered earlier in the panel that, statistically, it seems 1,000-1,500 games are needed to truly crown a champion. Obviously no league could ever make a season nearly three calendar years' worth of games, but it does give rise to the next and perhaps most interesting question of the panel: what do you think is the best playoff system out there?
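A quick aside before the answers: Paine's 1,000-1,500 figure is easy to sanity-check with a toy simulation. The 55% vs. 50% true win probabilities below are my own assumed numbers, not anything presented on the panel:

```python
import random

def prob_better_team_finishes_ahead(n_games, p_good=0.55, p_avg=0.50, trials=5000):
    """Simulate two teams playing independent schedules of n_games and count
    how often the truly better team (win prob p_good) ends with the better record."""
    ahead = 0
    for _ in range(trials):
        wins_good = sum(random.random() < p_good for _ in range(n_games))
        wins_avg = sum(random.random() < p_avg for _ in range(n_games))
        ahead += wins_good > wins_avg
    return ahead / trials

for n in (16, 82, 162, 1000, 1500):
    print(n, round(prob_better_team_finishes_ahead(n), 3))
```

With those assumed strengths, the truly better team finishes ahead only slightly more often than a coin flip over 16 games and only approaches certainty somewhere past a thousand. Anyway -- back to the best-playoff-system question.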

Oliver Luck immediately said "European football", which I'm assuming meant the UEFA Champions League. Although perhaps surprising to some, NCAA football (and basketball, to some degree) tries to mimic this format by crowning conference winners with entry to an inter-conference tournament of some sort -- like the UEFA Champions League entry requirements. The differences then become timing and playoff format. UEFA knockout ties are played on an aggregate scoring system, where two games (one home, one away) are played -- basically a 180-minute soccer match. Marinak said this really was not possible in the MLB -- playing what would amount to an 18-inning game and comparing total runs scored across multiple games would be outrageous.

Neil Paine went on to say the NFL has the best playoff system: a single-elimination tournament that rewards all the division winners, with a wild card round that rewards the teams from each conference with the highest winning percentage who are not already division winners. You couldn't play a series of games in the NFL to see whether a single win was skill or a fluke, but that would be ideal.

There were no analytics really mentioned in this talk. A lot of data was alluded to -- how it informs their playoff systems, how it informs their regular season match-ups, etc. -- but nothing was really presented. And I'm not sure, on the league side, whether that is something to protect or not. Obviously teams are protecting their analytical edge. The leagues, however, aren't competing against any other leagues in the US -- each league is independent and has its own fan base. Sharing data and analytics from the league side seems like it wouldn't do harm and could actually be beneficial.

The Curry Landscape

Kirk Goldsberry helped revolutionize how we visualize basketball with his cartographic approach to sports data visualization. His presentation, which was mostly based on his 2014 Grantland post, was a discussion of whether to move the NBA 3-point line. Using visualization, he discussed possible routes for moving the 3-point line: 1) move it back to where the average shot is made 33% of the time, 2) move it back to 25 ft but eliminate the corner 3, 3) move it back to 25 ft and widen the court by 3 ft on each side, 4) do nothing.
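The 33% figure in option 1 comes from a simple expected-value idea: a three made 33% of the time is worth about one point per attempt. Goldsberry works from real shot charts; the sketch below uses a made-up shot log just to show the kind of calculation involved:

```python
# Hypothetical shot log: (distance_ft, made) pairs. Real work would use
# player-tracking / play-by-play data, which I don't have here.
shots = [(22, 1), (23, 0), (24, 1), (24, 0), (25, 0), (26, 0), (23, 1), (27, 0)]

def fg_pct_by_distance(shots, bin_ft=1):
    """Aggregate make rate per distance bin."""
    bins = {}
    for dist, made in shots:
        key = int(dist // bin_ft) * bin_ft
        attempts, makes = bins.get(key, (0, 0))
        bins[key] = (attempts + 1, makes + made)
    return {d: makes / attempts for d, (attempts, makes) in sorted(bins.items())}

for dist, pct in fg_pct_by_distance(shots).items():
    points_per_attempt = 3 * pct  # a 33% three is worth ~1.0 point per attempt
    print(f"{dist} ft: FG% {pct:.2f}, points/attempt {points_per_attempt:.2f}")
```

The distance where points-per-attempt on a three drops to roughly the league-average value of a two is where option 1 would put the line.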

In what was probably one of the most TED-like talks of the day, Goldsberry says there isn't necessarily anything wrong with the 3-point line -- especially if you like how the game is played currently with an emphasis on passing and team chemistry.

The analytics in this particular talk had already been presented elsewhere and used relatively simple metrics -- FG% from specific locations on the court. It was a very clean, fun, and easy-to-understand talk, just not really mind-blowing.

Biometrics: The Next (and Biggest) Analytics Frontier

Andy Glockner discussed the future of biometrics in sports. The current state of biometrics is that it is not universally accepted -- some coaches like it, some athletes use it, some front offices dabble in it. But those who are not all-in on biometrics usually have strong opinions against it. Thus, biometrics itself is a polarizing subject. Glockner makes the case that biometrics should not be seen as evil -- as in, used against a player in negotiations or trades -- but rather as beneficial to player health and safety, ultimately sustaining the careers of athletes.

Glockner also asks who ultimately owns the data? Is it the athlete? The front office? Coaches? The researchers? The league? Ownership of data itself is problematic in the sense that it would be identifiable data -- which could be used against someone if the results were not favorable.

Two issues I thought of during the session had to do with data collection and data management/ethics. For collection, one of the biggest issues in sports analytics is signal-to-noise ratio: there is a large amount of noise for these sensitive measures, which usually require near-motionless collection. There was no mention of the filtering techniques used to control for movement in these more sensitive biometric data. Nor did any of the sessions on a specific data collection tool mention how their hardware accounts for motion artifacts. Speaking specifically of neural data collection, movement heavily distorts the data -- sometimes to the point where the data are useless. How are other biometric methods avoiding this?
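To be clear about what I mean by filtering: in physiological signal processing a common first pass is a band-pass filter that keeps the band of interest and attenuates slow movement drift and high-frequency noise. A minimal sketch with synthetic data follows -- my own illustration, not anything a vendor described:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, fs, low_hz, high_hz, order=4):
    """Zero-phase Butterworth band-pass: keeps the physiological band and
    attenuates slow motion drift (below low_hz) and high-frequency noise."""
    b, a = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)

# Synthetic example: a 1.2 Hz "heart-rate-like" component buried in 0.3 Hz
# movement drift plus broadband noise, sampled at 100 Hz.
fs = 100
t = np.arange(0, 10, 1 / fs)
raw = (np.sin(2 * np.pi * 1.2 * t)
       + 2 * np.sin(2 * np.pi * 0.3 * t)
       + 0.5 * np.random.randn(t.size))
clean = bandpass(raw, fs, low_hz=0.8, high_hz=3.0)
```

Whether the commercial devices do anything like this -- and how well it survives an athlete sprinting and jumping -- is exactly what nobody addressed.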

For data management and ethics, identifiable data (e.g. game performance data) are useful for comparing individuals to team or league averages -- and to other players. But when the data are more sensitive and not necessarily a direct performance measure -- like biometric data -- their use can come with either good or bad intentions. In the quest for knowledge, the science community at large abides by the idea of doing no harm to our participants and maximizing benefits for participants and the public. Private industry also has to abide by ethics laws (e.g. HIPAA), but currently has no guidelines for handling these data. Nothing is de-identified. Nothing goes through any sort of check or balance from a league office or official. What happens when biometrics indicate larger health risks? How does that influence a player's season -- or does no one know about it? Ownership of the data is one factor -- the use of that data and how it can help or hurt an athlete is another, and one that needs to be addressed sooner rather than later.

Wearable Technology: From the Practice Field to Prime Time

A large theme in this talk was: what is the intention of using data? Some people feel data dictates decisions, but really data should inform decisions in a way that also incorporates other (possibly non-data-driven) inputs. One panelist said, "Data should always complement common sense," and I agree with this sentiment because the direction of science flows from testing a phenomenon to developing new knowledge -- and hopefully "new knowledge" in turn becomes "common sense".

This particular talk had high hopes but did not have very well-informed statements. One panelist said they want to start moving wearable technology into the high school and younger markets. Although it seems very cool from an adult standpoint to have more information on the athletic performance of children, the research suggests an increase in spending on youth sports tends to decrease the youth athlete's interest. So while it would be cool to do this, it may also have a negative long-term influence on the games you love.

Another statement was based around brain training. Outside of simply gaining knowledge and having experiences, so-called "brain training" via technology intervention is relatively unfounded. Use of neurogaming apps is largely disavowed by the scientific community that researches the efficacy of modulating brain and behavior through technology. In particular, consumer neural stimulation devices, which generally are some sort of transcranial stimulation instrument, have also shown no effect on neurological modulation to date. Research on behavioral modification through neural stimulation has not been robustly studied, but the current literature reveals no replicable cognitive benefits -- similar to the consensus on brain-training apps. There seems to be evidence for neural stimulation helping older adults (>60 yo) with motor and cognitive performance, but currently no evidence supporting "better" or "faster" motor and cognitive performance in younger adults (or children) -- let alone long-term enhancement.

eSports for the Win

The panel for eSports would probably attract most gamers -- some heavy hitters in the eSports community were on it -- but I really wanted to see how having analytics for literally everything has helped get eSports to where it is today: boasting more total viewership than any traditional US sport (save for the NFL).

However, the panel was less about how they have innovated their field with data and more about the history of eSports, with a lot of the discussion based around revenue. And then it was a comparison between eSports athletes and traditional sports athletes -- how eSports athletes also train and have PT and practice and work towards high-level competitions.

Two areas where data has played a large role in shaping eSports, and that are unique relative to traditional sports, are match-up scheduling and broadcasting. Match-ups are data-driven and tailored to always produce high -- and evenly matched -- levels of competition throughout the realm of eSports. This helps drive viewership up and keep viewership consistent across the calendar year. eSports can do this because there is no real set season schedule -- set schedules occur at the tournament level but not during what would be considered the "regular season". This would be like the NFL keeping 16 games for each team but seeding and reseeding every team after each week to always ensure the closest match-ups throughout the season. Although not necessarily feasible currently, it would make for near-guaranteed marquee match-ups each week as opposed to guesswork from pre-season data.
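The panelists didn't describe their matchmaking math, but one common way to do this kind of continuous reseeding is an Elo-style rating that updates after every match, with opponents paired by nearest rating. A minimal sketch under that assumption (the K-factor and team names are made up):

```python
from typing import Dict, List, Tuple

K = 32  # Elo K-factor (assumed, not from the panel)

def expected(r_a: float, r_b: float) -> float:
    """Expected score of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings: Dict[str, float], winner: str, loser: str) -> None:
    """Shift both ratings toward the observed result."""
    e_w = expected(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - e_w)
    ratings[loser] -= K * (1 - e_w)

def closest_pairings(ratings: Dict[str, float]) -> List[Tuple[str, str]]:
    """Greedy 'reseeding': sort by rating and pair neighbours, so every
    match-up is as evenly matched as the current ratings allow."""
    ordered = sorted(ratings, key=ratings.get, reverse=True)
    return list(zip(ordered[::2], ordered[1::2]))

ratings = {"TeamA": 1500.0, "TeamB": 1520.0, "TeamC": 1610.0, "TeamD": 1480.0}
print(closest_pairings(ratings))   # e.g. [('TeamC', 'TeamB'), ('TeamA', 'TeamD')]
update(ratings, winner="TeamB", loser="TeamC")
print(closest_pairings(ratings))   # pairings recomputed from the new ratings
```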

Broadcasting is also different in eSports, as matches are both streamed on various internet platforms (Twitch, YouTube, Streamup, etc.) and covered in various media, ranging from single-blogger-level coverage to ESPN-level coverage. The eSports infrastructure panelists discussed how data exists on how viewers interact and what they interact with -- whether particular shots or points of view generate more conversation, and so on. This, in turn, can drive more viewer-controlled streaming and content -- leading to a unique streaming experience for each individual. The MLB, NFL, and NBA have all tried implementing new ways for viewers to experience the game -- different camera angles available to stream, in-game stats and analysis, and on-court sideline reporting. However, the advantage of eSports is the ability to literally be in the game, watching exactly from the point of view of an eSports athlete. The NFL has tried using its overhead camera angle to give more of a quarterback POV, but it has only been used sparingly, particularly for kickoffs or supplemental replay angles. It would be amazing to have multiple network channels dedicated to multiple views -- as opposed to having to digitally stream these views -- but I believe that would be a lot of work for not a lot of return for traditional sports.

Basketball Analytics: Hack-a-Stat

In a talk labeled "hack-a-stat," featuring Tom Thibodeau, Brian Scalabrine, and Mike Zarren, there was literally zero discussion of the hack-a strategy. Welp.

There was, however, very informed discussion on data and how front offices, coaches, and players interpret and use it. Some of the best-stated lines of the conference came through on this panel, such as: not all teams will gain an edge with data -- data can also introduce more errors. Sometimes we think data creates clarity, but at times (especially if the data is noisy) it can create haze, and that haze can lead to error. However, it was noted that data and analytics in sports today are more accurate, more advanced, and more unique than at any point in time.

Everyone's perspective on data on this panel echoes a sentiment from many at this conference: data helps inform injury prevention via fatigue analysis -- leading to informed rest. Rest and injury prevention was maybe the one thing apparent in almost every talk discussing the use of analytics. Which is funny, since resting-state research in cognitive neuroscience is also one of the most discussed things I hear at neuroscience conferences. Another aspect of data as an aid is in acquiring new talent. Apparently during the 2012 draft, the Celtics had Draymond Green as the 3rd-ranked player on their board, whereas Anthony Davis was the overall #1 for most teams (to which Scalabrine chided Zarren about why they didn't draft him -- they had two first-round selections).

The conversation steered toward things you cannot measure, like "heart". They particularly mentioned Draymond Green's "heart" -- how do you measure "heart" or "drive" or "passion"? There are also other non-measurables like psychological defense, namely "The Howard Effect": if Dwight Howard is just standing in the paint, players are less likely to attempt a shot inside the paint simply because he is present. A panelist stated this effect holds true for players like Marc Gasol and Andrew Bogut, whose defensive statistics on paint shots aren't remarkably good, but the fact that they stand there psychs ball handlers out to the point where Gasol and Bogut face fewer paint attempts overall -- even though statistically they are mediocre paint/rim defenders. This is something that has no statistical assessment because we cannot know whether ball handlers are actually interpreting their presence this way. But it isn't unfounded -- most psychological aspects of sports during play are only conjecture.

One of the last big topics was in-game data analytics informing plays. There were initial, hesitant "no" responses until one of the panelists (I believe Thibs) brought up overtime: how do you rest a player based on a cap on minutes played when OT occurs? In-game analytics are not just difficult, but rather impossible. In another panel, you'll hear about how analytics changed a style of play, but not within a game. As Thibs said, play to your strengths. Just because you may be outplayed doesn't mean your best game plan isn't the correct one. Data should echo that sentiment.

A few very flash-in-the-pan conversations happened -- this panel should honestly have a recurring podcast, but that will never happen:

The three point line. They all agree nothing should be done.
The game, aesthetically, is more interesting than ever. If you don't enjoy the spread pick-n-roll, then basketball may not be your sport.
Wearables are a hot topic, and one to be treated with much sensitivity and respect.
Is 82 games too many? Thibs says a cautious yes. The perfect number of games, as implied, would be a season where you didn't have to rest your best players. In the current system, you have to do that.

Analytics-wise, the state of (publicly available) basketball analytics is only as good as the applications. Currently, the applications are: injury prevention/rest analytics, very general performance analytics, and almost zero league-side analytics.

Evolution of Sports Journalism

This may perhaps be one of the most true-to-title panels at the conference. The initial question was a basic one: how has journalism changed with the advent of data? Each panelist had very valuable opinions from not just their specific niche in media but also their contribution to media at large: Jaymee Messler (The Players' Tribune), Carl Bialik (538), Ethan Sherwood-Strauss (ESPN), and David Dusek (Golfweek Magazine).

Messler was a particular outlier in this talk, mainly because she publishes posts as told by players in professional sports. She provided very unique detail about how The Players' Tribune has tried to stitch the fracture between media and athletes by allowing athletes to tell their own story through a medium that would not be overshadowed by major media. Specifically, with the advent of social media, if athletes have more control over their narrative, it makes the most sense for athletes to report their own narrative -- be it the only story about a subject posted to the internet or one side of a news story that has been repeated throughout all media for a period of time.

Dusek, whose interactions with golf athletes have by his own account noticeably become more difficult, stated that social media has also given athletes a voice, and even when they do not have access to a first-person platform, they find a way to tell their own narrative or speak directly to fans.

Sherwood-Strauss spoke about how athletes actually do not feel as empowered as described by Dusek or Messler. He recounted a story about Klay Thompson (GSW), who was featured on a podcast for nearly an hour. Sherwood-Strauss brought this up to Thompson, to which Thompson essentially replied, who would care what I was saying?

The next discussion point was about media and their coverage -- since mainstream coverage is so overwhelmed with major story lines (e.g. Golden State today, Tiger Woods in the late 90s), are reporters missing "things" (stories, details, moments, etc.)? Perhaps the best answer was Bialik's: reporters are encouraged to report ASAP and as accurately as possible, but in a world where information travels globally within seconds, "we always miss things".

They switched over to how data has informed reporting, and everyone seemed to be in agreement: reporters are covering not just who is winning but also what is interesting -- about the game, the team, the players, the coaches, etc. Sherwood-Strauss went on to specify that the classic game column is dying, evolving into other kinds of stories: stories with specific angles, data-driven stories, etc.

Messler discussed how athletes are honing their skills in writing and media, perhaps in hopes of a career beyond the sport. And behind the athlete represented in media are people behind the scenes (editors, managers, etc.) who also contribute but are not as visible. Athletes are more enabled than ever to begin establishing careers beyond the sport, earlier and more often.

As athletes begin to represent themselves more often in media, Messler states that athletes also begin to respect the media process more -- bringing athletes and media closer together. A panelist noted that the fissure between reporters and athletes has never been greater and that it is becoming more difficult than ever to conduct what were once basic interviews or post-performance questions. However, reporters like Bialik (who represent an unbiased, data-driven reporting angle) are apparently viewed as more transparent and less threatening than other beat writers, whose reporting angle may or may not be supportive.

Dusek states the market for golf statistics has never been greater. Advanced analysis in golf is mostly unheard of, and he wishes he could go back in time and take a statistics course -- a time-traveling proposition many of the panelists agreed with. A number of panelists further admitted that they and/or their bosses are not savvy in data analytics, so the analytics presented in articles are only as accurate and interpretable as the writer sees fit.

Messler goes on to state that data has helped inform beyond the content within an article -- The Players' Tribune has collected data on readership and can steer better content based on analytics. This was a take no one else spoke up about -- to no surprise, as Messler is in control of data analytics for TPT, whereas the other reporters on the panel were either shy about saying what dictates their content (perhaps "trade secrets") or simply did not know or have access to the statistics Messler has for her site.

Dusek continued to speak on behalf of his golf readers, saying he knows the demographics of who reads golf news and that content delivery isn't necessarily hard -- advanced data-driven statistics are not needed to inform more state-of-the-art content.

1st and Goal: Football Analytics

One of the running question clichés in these panels: what are analytics to you? Every single person in the building has a different answer. Pair two mathematicians together and they'll differ. I have a personal answer based in the scientific method -- completely different from what many of the panelists said. Atlanta Falcons general manager Tom Dimitroff said, "Analytics confirm," which is perhaps the vaguest scientific statement you could make.

Sports analyst Sandy Weil said, "Analytics are about questions." This is also a vague scientific statement that won't get you into trouble. Weil goes on to specify analytics are not just stats -- it's beyond "just stats". Which I do believe is closest to the nature of analytics: a mix between data results and interpretation.

Reporter Mike Reiss said, "Analytics aren't just all numbers either -- it's only one piece." Also a very true statement -- many panelists throughout the conference have discussed how analytics are there to supplement or inform a larger statement. At times, statistics and analytics are the star of a report, but that seems rarer than traditional reporting with no statistics in it. Reiss goes on to say that statistical usage is sometimes opaque and that the intent behind using stats or analytics needs to be more transparent.

NFL player and PhD student John Urschel countered Reiss by saying they are attacking a strawman -- rather than speaking to only one instance of a game or play, why not develop a more informed line of reasoning using a larger sample size (and therefore, a clearer analytical representation)? Weil goes on to say that stats are perhaps not a driving force in reporting but rather supplementary to football reporting. There's discussion of small sample sizes in the NFL and that there is only one really strong analytically driven portion of the NFL: the draft. Outside of the draft, traditional (e.g. frequentist/p-value) analytics are not reliable.
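To make the small-sample point concrete (my own back-of-the-envelope, not something the panel presented): a 95% confidence interval on a single 16-game record is enormous compared to the same win rate over ten seasons' worth of games.

```python
import math

def wilson_interval(wins: int, games: int, z: float = 1.96):
    """95% Wilson score interval for a true win probability given a record."""
    p = wins / games
    denom = 1 + z**2 / games
    centre = (p + z**2 / (2 * games)) / denom
    half = z * math.sqrt(p * (1 - p) / games + z**2 / (4 * games**2)) / denom
    return centre - half, centre + half

# A 10-6 NFL season vs. a 100-62 stretch at the same .625 win rate
print(wilson_interval(10, 16))    # roughly (0.39, 0.82) -- very wide
print(wilson_interval(100, 160))  # roughly (0.55, 0.70)
```

A 10-6 team's "true" quality could plausibly be anywhere from a bottom-feeder to an elite team, which is exactly why p-value-style conclusions from one NFL season are shaky.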

One panelist suggested qualitative (not quantitative) measures are perhaps more valuable to the NFL, given the sample size. Urschel suggested the sample size for any specific analytical assessment is adequate so long as your hypothesis is also sound.

It sounds like the NFL and American football as a whole are behind on the analytics front, and many of the issues that are publicly visible (e.g. small sample sizes, high player turnover) affect the data analytics of a team.

Modern NBA Coaching: Balancing Team and Talent

This panel featured Scott Brooks (former Seattle SuperSonics/Oklahoma City Thunder head coach), Mike Brown (former LA Lakers and Cleveland Cavaliers head coach), and Vinny Del Negro (former LA Clippers head coach).

This was perhaps the most uninspired panel -- even with former rather than active coaches -- and many points already harped on throughout the conference were repeated. For example: holding players accountable when they make mistakes; don't copy teams who are winning -- use the talent on your team to engender good team play on game days; team culture relies on players holding each other accountable; how data informs coaches on rest/injury prevention; how data can sometimes introduce new errors you haven't accounted for (either the data is erroneous or a new error emerges in your team).

Some coach-specific questions: is a swing-4 who is a 3-and-D player needed? Del Negro suggests an S4 isn't needed if that's not what the makeup of your team provides -- play to your strengths, not to someone else's game. In a sense, I agree. Theoretically, you never want to force players to perform something they cannot do -- much less something someone else on the team can do better. Which leads me to the idea that, if your S4 is your best 4, it may be on the coach's creativity to not play a 3-and-D player as an S4. Including an S4 shouldn't be difficult -- you swap one of your forwards for either a role-playing forward or another guard. Perhaps the coaches on the panel are former coaches because they never adapted to the S4 concept of play.

Someone asked if you can teach basketball IQ. From a neuroscience point of view, I personally know you cannot teach basketball IQ, just like you cannot teach cooking IQ. You cannot just implicitly learn something and then be a complete genius when you start doing the explicit task. As Del Negro mentioned: basketball IQ is not necessarily the most valuable thing -- basketball experience will get you much further in your career.

From the analytics side, there were no real analytics presented. The analytics that were discussed were simply old hat by this point of the conference.

Sports Science: Extending the Athlete's Peak Performance

As a note, Christie Aschwanden (538) was on this panel and barely said a word. Perhaps it was her laid-back demeanor or perhaps it was the commanding personalities of Erik Korem (U of Kentucky) and David Martin (76ers). Either way, I was really looking forward to Aschwanden's points, but I felt they may have been overshadowed. Aschwanden is maybe one of the most well-respected science journalists in the world!

One important point brought up very early in this panel was the coach-scientist relationship. All teams now have some sort of scientist or team of scientists on staff, yet the relationship had gone nearly unmentioned up to this point in the conference. The relationship can fracture due to poor transparency from either the scientist or the coach. Increased transparency (e.g. how data will be analyzed and used, who will have access to the data, what the intent of a specific test or measurement is, etc.) also improves the working relationship between the coach and the scientist.

Aschwanden makes a point about fatigue -- the best measure of fatigue is still self-report from the athlete. This presents a common bias where a player may report their fatigue level as lower (e.g. closer to typical) than it truly is. In that case, extended or greater fatigue can lead to a greater chance of injury. She doesn't explicitly call for action, but throughout the conference it is clear that fatigue and rest research are both needed in order to inform athletes about injury probability.

Martin makes some pretty harrowing statements for sports scientists. People don't reject sports science; people reject the sports scientist -- indicating transparency is needed before the sports industry begins to accept sports scientists as part of the sports community. Tech doesn't drive doping; incentives and regulations drive doping. In reference to the Tour de France, when doping goes unregulated and there is a market for winners, the prevalence of doping increases. But technology itself isn't a main factor in doping; people will find the minimum threshold that increases their odds of winning without raising suspicion. Aschwanden contends that the most advanced analytics out there are within doping: the innovations in doping, and how far athletes, coaches/trainers, or scientists will go to obtain new enhancements, are far beyond the drive of current sports scientists who do not work with doping.

