The NFL is continuing to crowdsource new ways to track head and helmet impacts during games from data scientists, and for the second straight year the winner of its artificial intelligence competition comes from outside the United States.
The NFL and Amazon Web Services awarded US$100,000 (RM418,400) in prizes for this year’s competition, with the top prize of US$50,000 (RM209,200) going to Kippei Matsuda of Osaka, Japan, the league announced Friday.
The task for Matsuda and the rest of the data scientists who took part was to use artificial intelligence to create models that could detect helmet impacts from NFL game footage and identify the specific players involved in those impacts.
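The article does not describe the entrants’ actual methods, but as a rough illustration of the kind of model the task calls for, the sketch below runs a generic object detector over a single frame of footage to pick out helmet candidates. The detector architecture, the fine-tuned weights file and the confidence threshold are all assumptions made for the example, not details from the competition.

import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Generic two-class detector (background + helmet); the weights file is hypothetical.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
model.load_state_dict(torch.load("helmet_detector.pth"))
model.eval()

# A single frame extracted from game footage.
frame = to_tensor(Image.open("frame_0001.jpg"))
with torch.no_grad():
    outputs = model([frame])[0]

# Keep confident helmet boxes; deciding which boxes correspond to actual
# impacts, and to which players, would be later steps in a full pipeline.
helmet_boxes = outputs["boxes"][outputs["scores"] > 0.5]
print(f"{len(helmet_boxes)} helmet candidates in this frame")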
NFL executive vice president Jeff Miller, who oversees health and safety, said the league began manually tracking helmet impacts for a small number of games several years ago.
The tedious process of tracking every helmet collision, especially along the line of scrimmage, made it difficult to do more than a small sampling of games as the league tried to gather more data on head impacts.
By sharing game film and data with the data science community, the league hopes to continue developing better systems that can track these impacts more efficiently. The league estimates Matsuda’s winning system could detect and track helmet impacts with greater accuracy and 83 times faster than a person working manually.
“There were certainly any number of domestic participants too, but the data science community is large and looking for solutions in places or with communities you wouldn’t normally talk to may end up being a pretty fruitful exercise,” Miller said. “So I think we’ve proven that this model of working with the global data science community is helpful to us and will continue to be and we’ll continue to engage in.”
The first year of the competition, in 2020, focused on models that detected all helmet impacts from NFL game footage. That competition, which drew almost 7,800 submissions from 55 countries, was won by Dmytro Poplavskiy of Brisbane, Australia.
This year’s competition focused more on impacts involving specific players and included 825 teams and 1,028 competitors from 65 countries, with a total of 12,600 submissions.
“This was the most exciting competition I’ve ever experienced,” Matsuda said in a statement. “It’s a very common task for computer vision to detect 2D images, but this challenge required us to consider higher dimensional data such as the 3D location of players on the field. NFL videos are also fun to watch, which is very important since we need to see the data again and again during competition. I would be honored if my AI can help improve the safety of NFL players.”
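Matsuda’s description of combining 2D detections with players’ locations on the field hints at a data-association step. Purely as an illustration of that idea, and not as his actual solution, the sketch below projects tracking positions into image space with a pre-estimated homography (an assumption for the example) and pairs detections with players by minimum-cost assignment.

import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_players(helmet_centers_px, player_xy_field, homography):
    # helmet_centers_px: (N, 2) helmet detection centers in pixels.
    # player_xy_field: (M, 2) player positions on the field from tracking data.
    # homography: (3, 3) field-to-image mapping, assumed already estimated.
    ones = np.ones((len(player_xy_field), 1))
    proj = (homography @ np.hstack([player_xy_field, ones]).T).T
    players_px = proj[:, :2] / proj[:, 2:3]  # perspective divide into pixel coords

    # Cost matrix: pixel distance between every detection and every projected player.
    cost = np.linalg.norm(helmet_centers_px[:, None, :] - players_px[None, :, :], axis=2)
    det_idx, player_idx = linear_sum_assignment(cost)
    return list(zip(det_idx, player_idx))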
Miller said the league’s goal is to create a “digital athlete” that would become a virtual representation of the movements, actions and impacts an NFL player experiences on the field during a game and could be used to help predict, and hopefully prevent, injury in the future.
“That is novel for us and obviously has great importance in how we think about making the game safer for the athletes,” Miller said. “It will have an effect on training and coaching, certainly. It will have an effect in rules without a doubt. It will definitely have an impact in terms of equipment, and benefits that we can see from equipment because now for the first time we’ll have a pretty good appreciation for every time somebody hits their head during the course of an NFL game, and therefore, we will look for ways to prevent many of those.”
Priya Ponnapalli, senior manager at Amazon’s Machine Learning Solutions Lab, said machine learning’s ability not only to analyse past data but also to make forward-looking projections will be useful in the future in helping create a digital version of players at all positions and analyse the types of hits they take.
“Machine learning is a very intuitive process and you get to a certain level of performance, and in this case we’ve got some pretty accurate and comprehensive models,” Ponnapalli said. “And as we collect more data, these models are going to get better and better.” – AP