When we build applications, we ask ourselves, “How can we help people understand how to use this?” But as artificial intelligence becomes more advanced, we’ve begun flipping that question on its head: now we seek to make our applications understand our users.
To explore this question, we've created a Photo Hunt game on Twitter. Help us test it out.
Recent AI advances make it possible for technology to understand what the user sees. For example, search for “drooling dog” in Google Photos, and it will surface your photos of drooling dogs.
The tools that make these natural interactions possible are now available to all developers.
Google recently released TensorFlow, a library for training artificial neural networks (ANNs), along with a pretrained network that can recognize the 1,000 object categories from the ImageNet challenge. Give it a photo of a French loaf, and it will tell you it’s a French loaf.
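To make this concrete, here is a minimal sketch of classifying a photo with a pretrained ImageNet network. It assumes TensorFlow (with its Keras API) is installed; MobileNetV2 stands in for whichever pretrained model you choose, and weights are downloaded automatically on first use. The file name `bread.jpg` is purely illustrative.

```python
import numpy as np
import tensorflow as tf

# Load a network pretrained on the 1,000-category ImageNet challenge.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

def classify(image_path, top=3):
    # Resize the photo to the 224x224 input the model expects.
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)[np.newaxis, ...]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
    preds = model.predict(x)
    # Map the 1,000 output probabilities back to human-readable labels.
    return tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=top)[0]

# Illustrative usage: classify("bread.jpg") returns the top labels,
# e.g. entries like (synset_id, "French_loaf", probability).
```

The heavy lifting, years of research and weeks of training, is packaged behind a single function call; the application code reduces to resizing an image and reading off labels.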
Our question: How will this change how we design and build digital products?
Using this technology, we’ve built the Photo Hunt experiment. It’s a simple game that challenges you to find as many objects from our list as you can. If you're the first person to find an object, you’ll get credit.
Our demo is an example of how tools like TensorFlow can empower developers and designers to build applications that understand more open-ended inputs, like photos. AI is no longer reserved for the Googles and Microsofts of the world. It’s still not easy, but it’s now possible for everyone to start training and deploying applications that understand our users.
The best way to get a handle on what these advances mean for application design and development is to start using them.