The technology giant Google announced in March that it had added fall detection capabilities to the Pixel Watch, a feature that uses the watch’s sensors to determine whether a user has taken a hard fall.
If the watch doesn’t sense a user’s movement for about 30 seconds, it vibrates, sounds an alarm and displays a prompt for the user to indicate whether they’re fine or need help. If no response is selected after one minute, the watch notifies emergency services.
In the first part of our two-part series, Edward Shi, product manager on the Android and Pixel personal safety team at Google, and Paras Unadkat, product manager and product lead for Fitbit’s wearable health/fitness sensing and machine learning at Google, spoke with MobiHealthNews to discuss the steps they and their teams took to create the Pixel’s fall detection technology.
MobiHealthNews: Can you walk me through the fall detection development process?
Paras Unadkat: It was definitely a long journey. We started this a few years ago, and the first thing was how we think about collecting a set of data and just understanding falls from a motion sensor perspective. What does a fall look like?
So, to do this, we consulted quite a large number of experts who worked in several different university laboratories in different countries. We consulted on what the mechanics of a fall are. What are the biomechanics? What does the human body go through? What do the reactions look like when someone falls?
We collected a lot of data in controlled environments, like induced falls, having people strapped into harnesses and, like, losing their balance, and just seeing what that looked like. So that kind of started us off.
And we were able to start that process, building that initial database to really understand what falls look like and really break down how we actually think about detecting and analyzing fall data.
We also started a big data collection effort over many years, and that was collecting sensor data of people doing other, non-falling activities. The key is to distinguish what is a fall and what is not a fall.
And then we also, in the process of developing this, had to figure out ways that we could validate this works. So one thing we did is we went down to Los Angeles, and we worked with a stunt team, and we just had a bunch of people take our finished product, test it, and basically wear it while doing all these different falling activities.
And they were trained professionals, so they weren’t hurting themselves doing it. We were able to observe all these different kinds of falls. It was really interesting to watch.
MHN: So you worked with stunt performers to see how the sensors worked?
Unadkat: Yes, we did. So we had many different types of falls that we had people do and simulate. And, in addition to the rest of the data that we collected, that gave us validation that we were actually able to see this thing working in real-world situations.
MHN: How can it tell the difference between someone who is playing with their child on the floor and hits their hand on the ground, or something similar, and someone actually taking a hard fall?
Unadkat: So there are a few different ways to do this. We use sensor fusion between several different types of sensors in the device, including the barometer, which can indicate a change in altitude. So when you fall, you go from a certain height down to the ground.
We can also detect when a person has been stationary and lying there for a certain amount of time. So that kind of feeds into our prediction, like, okay, this person was moving, and all of a sudden they had a strong impact and they weren’t moving anymore. They probably fell hard and probably need help.
We also collected huge datasets of people doing, like we were talking about, free-living activities throughout the day, not taking falls. We feed that into our machine learning model through these massive pipelines that we’ve built to take all that data and analyze it. And that, along with the other dataset of actual hard, high-impact falls, lets us distinguish those types of events.
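As a rough illustration of the kind of logic Unadkat describes, here is a minimal sketch that gates a learned classifier behind simple sensor-fusion rules. Every threshold, field name and the model interface are hypothetical stand-ins, not Google’s actual implementation.

```python
# A hypothetical sketch of the approach described above: a cheap rule-based
# gate over fused sensor signals (impact, barometer altitude drop,
# post-impact stillness) in front of a classifier trained on labeled falls
# vs. free-living activity. All thresholds and names are illustrative.
from dataclasses import dataclass

@dataclass
class SensorWindow:
    peak_accel_g: float     # peak acceleration magnitude in the window
    altitude_drop_m: float  # barometer-derived change in altitude
    still_seconds: float    # how long the wearer stayed motionless afterward

def candidate_fall(w: SensorWindow) -> bool:
    """Rule gate: only run the model on windows that look like a fall."""
    return (w.peak_accel_g > 3.0          # hard impact
            and w.altitude_drop_m > 0.5   # went from standing height downward
            and w.still_seconds > 20.0)   # stationary after the impact

def detect_hard_fall(w: SensorWindow, model) -> bool:
    """Fuse the rule gate with a scikit-learn-style classifier."""
    if not candidate_fall(w):
        return False
    features = [[w.peak_accel_g, w.altitude_drop_m, w.still_seconds]]
    return bool(model.predict(features)[0])  # 1 = hard fall
```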
MHN: Is the Pixel constantly collecting data for Google to see how it’s doing in the real world to improve it?
Unadkat: We have an option that is opt-in for users where, you know, if they choose, when they receive a fall alert, we receive data from their devices. We’re able to take that data, incorporate it into our model and improve the model over time. But it’s something that, as a user, you have to manually go in and say, “I want you to do this.”
MHN: But if people are doing it, then it will just keep getting better.
Unadkat: Yes, exactly. That’s the ideal. But we are constantly trying to improve all these models, even internally: continuing to collect data, continuing to iterate and validate, increasing the number of use cases we’re able to detect, increasing our overall coverage, and decreasing the false positive rate.
MHN: And Edward, what was your role in creating the fall detection capabilities?
Edward Shi: Building on all the hard work that Paras and his team had already done, basically, the Android and Pixel personal safety team that we have is really focused on making sure that users’ physical well-being is protected. And so there was a great synergy there. And one of the features that we had launched earlier was car crash detection.
And so, in many ways, they are very similar. When an emergency event is detected, the user may not be able to get help for themselves, whether or not they’re conscious. How do we get them help? And then making sure, of course, to minimize false positives. In addition to all the work Paras’ team had already done to minimize false positives, how could we, in the user experience, minimize that false positive rate?
So, for example, we check with the user. We have a countdown, we have haptics, and then we also have an alert sound: the whole UX, the user experience, that we’ve designed there. And then, of course, when we actually call emergency services, especially if the user is unconscious, how do we relay the necessary information to an emergency call taker so they can understand what’s going on and send the appropriate help to that user? And so that’s the work that our team did.
And then we also worked with emergency dispatch call centers to test our flow and validate: hey, are we providing the necessary information for them to act on? Do they understand the information? And would it be useful for them in a real fall event if we made the call on the user’s behalf?
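As a rough sketch of the escalation flow Shi describes (check with the user first, count down, and only then call), consider the following. The Watch class, its methods and the prompt wording are hypothetical stand-ins; only the one-minute window comes from the article.

```python
# A hypothetical sketch of the escalation flow: haptics, an alarm and an
# on-screen prompt, with emergency services dialed only if the countdown
# expires with no response.
PROMPT_TIMEOUT_S = 60  # per the article: the call goes out after one minute

class Watch:
    """Stand-in device interface, used only for this sketch."""
    def vibrate(self): print("[haptics] buzz")
    def play_alarm(self): print("[audio] alarm sound")
    def prompt(self, text, options, timeout_s):
        # A real watch would render buttons and wait; we simulate no response.
        print(f"[screen] {text} {options} ({timeout_s}s countdown)")
        return None
    def call_emergency(self):
        print("[call] dialing emergency services with fall context")

def on_hard_fall_detected(watch: Watch) -> None:
    watch.vibrate()
    watch.play_alarm()
    response = watch.prompt("Did you take a hard fall?",
                            options=["I'm OK", "I need help"],
                            timeout_s=PROMPT_TIMEOUT_S)
    if response == "I'm OK":
        return  # user cancelled during the countdown; no call is made
    watch.call_emergency()  # no response (possibly unconscious) or needs help

on_hard_fall_detected(Watch())
```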
MHN: What kind of information would you be able to collect from the watch to transmit to the emergency services?
Shi: Where we come into play is basically after the whole algorithm has already done its job and says, “Okay, we’ve detected a hard fall.” Then, in our user experience, we don’t make the call until we give the user a chance to cancel and say, “Hey, I’m fine.”
So when we make the call, we actually provide the context to say, hey, the Pixel Watch detected a potential hard fall. The user didn’t respond, so we’re able to share that context as well, and then the location of the user in particular. So we keep it pretty concise because we know that clear and concise information is optimal for them. But if they have the context that the fall happened, and the user may have been unconscious, and the location, hopefully they can send help to the user quickly.
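A hypothetical sketch of the concise context Shi describes relaying: what was detected, that the wearer did not respond, and the wearer’s location. The function, field names and phrasing are illustrative, not Google’s actual dispatch message.

```python
# Illustrative only: assembling the three pieces of context named above
# into a short automated message for an emergency call taker.
def build_dispatch_message(lat: float, lon: float) -> str:
    return (
        "Automated alert: a Pixel Watch detected a potential hard fall "
        "and the wearer did not respond to prompts. Wearer location: "
        f"latitude {lat:.5f}, longitude {lon:.5f}."
    )

print(build_dispatch_message(37.42200, -122.08400))
```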
MHN: How long did it take to develop?
Unadkat: I have been working on it for four years. Yes, it’s been a while. It started a while ago. And, you know, we’d had initiatives within Google to understand the space, to collect data and things like that before, but this initiative started out a little bit smaller and then scaled up.
In the second part of our series, we’ll explore the challenges the teams faced during the development process and what future iterations of the Pixel Watch might look like.