Google’s DeepMind AI Now Capable of Rendering Scenes

Google’s DeepMind neural network is now capable of rendering scenes. Not just any scenes, mind you, but complex ones. Using its neural networks and learning functions, DeepMind can render views of a scene from angles it has never seen before. While that might all sound rather abstract and hard to understand, it’s a huge leap for learning software. What exactly does this mean, and what effect will it have on AI going forward?

Rendering Scenes 

The reason this is important, if somewhat boring-sounding at first, is that it represents a logical form of imagination. The AI is now capable of understanding a description of a geometrical scene, rendering it, and then rendering it from angles it has neither been shown nor had described to it. This is something humans do already, and easily.

So easily, in fact, that you’re likely overthinking what is being described. If you see an image of a car, you can assume that it has four wheels, whether or not you can see all four in the image. Similarly, you can intuit that the pavement behind the car in the image is still there. You can even guess that there are seats inside the car, as well as a steering wheel and a radio.
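To make the idea a little more concrete, here is a minimal, hypothetical sketch of the general workflow, loosely inspired by the kind of system described above (DeepMind published its model under the name Generative Query Network). None of this is DeepMind’s actual code: the encoder and renderer below are placeholder functions standing in for trained neural networks, and every name is illustrative.

```python
import numpy as np

# Hypothetical sketch: summarize a few observed views (image + camera pose)
# into one scene representation, then "render" the scene from a new viewpoint.
# encode_observation and render_query are placeholders for trained networks.

def encode_observation(image, camera_pose):
    """Placeholder encoder: squash one image and its camera pose into a vector."""
    return np.concatenate([image.mean(axis=(0, 1)), camera_pose])

def aggregate(encodings):
    """Scene representation: an order-independent sum of per-view encodings."""
    return np.sum(encodings, axis=0)

def render_query(scene_repr, query_pose, shape=(16, 16, 3)):
    """Placeholder renderer: a real generative network would predict the image
    seen from query_pose; here we just produce a deterministic dummy canvas."""
    seed = int(abs(scene_repr.sum() + query_pose.sum()) * 1000) % (2**32)
    return np.random.default_rng(seed).random(shape)

# Three observed views of a toy scene: a 16x16 RGB image plus a small vector
# packing the camera's position and orientation.
observations = [(np.random.rand(16, 16, 3), np.random.rand(7)) for _ in range(3)]

scene_repr = aggregate([encode_observation(img, pose) for img, pose in observations])
predicted_view = render_query(scene_repr, query_pose=np.random.rand(7))
print(predicted_view.shape)  # (16, 16, 3) -- a guess at the unseen viewpoint
```

The notable design choice in this sketch is that the scene summary is just a sum of per-view encodings, so the system doesn’t care how many views it saw or in what order, and it can be queried for any new camera angle afterward.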

DeepMind’s New Functionality is a Game-Changer 

This is the AI equivalent of imagination. An AI capable of understanding spatial scenes and making predictions based on limited data is a quantum leap forward. What’s more, the developers overseeing DeepMind didn’t anticipate this functionality.   

Ali Eslami, a Google team leader, had this to say in a phone interview with Ars Technica: “One of the most surprising results [was] when we saw it could do things like perspective and occlusion and lighting and shadows. We know how to write renderers and graphics engines.” What Eslami found most compelling, however, was that the software discovered those laws of physics on its own. It started out as a “blank slate,” he said, and was able to “effectively discover these rules by looking at images.”

We’re living in an exciting era. AI advancements have been coming faster and faster, and soon we may even see fully aware learning software. This is both exhilarating and terrifying.  
