ReCore World Full of Machine Learning

ReCore, the most beautiful and unique sand-swept world full of intelligent robots, was just announced for release at #XBoxE3. ReCore is an adventure game from Microsoft and Mega Man creator Keiji Inafune. The main character in this title is Joule. Joule, who bears a strong resemblance to the unit of energy in the International System of Units, seems to have a very strong connection to Mack, a robot-dog character. Together they are a team trying to bring humanity back to this planet.
This doesn’t sit well with a whole other faction of robots, who have their own plans for the planet. The scenery of this game changes after a strong, long-lasting sandstorm, which has a serious effect on the map and its gameplay. This world is home to very few humans. Joule and her robot-dog companion Mack are on a journey, and along the way they meet other buddy robots as well as nasty competition. The game’s main focus is on how this girl survives in such an ungodly world. The relationship between what appear to be the friendly robots and the aggressor robots becomes more volatile and deadlier as you go along and discover new areas. That’s where all the hidden goods and all the fun are. Mack was defeated after a deadly fight with robot spiders; his core, however, remained intact. That is where all the magic happens. Mack’s consciousness is contained within a gloriously blue sphere, serving as what we would refer to as his belly. He doesn’t need one, but it makes him very lovable. Anyhow, his core can be removed and implanted into another robot. Yes, any robot. A universal adapter for anything. You could consider it your smartphone with a wee bit nicer interface. So Mack’s consciousness can be implanted into another robot and will live on, adapting over time to a different skeleton with different abilities. I would refer to this as a backend that extends the smartphone’s capabilities. Moving on…
This is a very interesting facet of the game, and it makes you think… where could you take it from there? Cores containing consciousness and knowledge that could be implanted into any possible THING with a common interface. We’ve heard that before: Star Wars was one of the first to develop the idea of a robot able to interface with a lot of machinery and vehicles. So R2-D2 was the prototype, one might think. So now let’s get creative and assume you take a random core and put it into any machine, or you take that core to extend another core’s functionality. You could basically build Skynet all over again. Sorry, got sidetracked. You’d be able to apply all the machine learning subfields, such as supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, and deep learning. You’d have to handle the problem categories, such as classification, regression, forecasting, and optimization. Sounds awesome, right?! You’d have to deal with the whole process cycle and a lot more. ReCore is obviously a feature-rich game with an epic story full of mystery and adventure. Robots play an important role in this game, and a variety of robots leads to an even more feature-rich experience. Imagine that: you can partner up with different robots to achieve your goals in all those unique adventures. This game displays a very possible future, our future. A future for mankind. This game explores enough different niches of robotic science to fill libraries of publications.
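To make the most common of those subfields concrete, here is a minimal sketch of supervised classification: labelled examples in, a label for a new example out. Everything in it, the features, the labels, the "sensor readings", is invented purely for illustration.

```python
# A minimal supervised-learning sketch: a nearest-centroid classifier.
# All features and labels below are invented for illustration only.

def fit_centroids(samples, labels):
    """Average the feature vectors of each class into one centroid."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        sums.setdefault(y, [0.0] * len(x))
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + v for s, v in zip(sums[y], x)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def classify(centroids, x):
    """Assign x to the class whose centroid is closest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda y: dist(centroids[y]))

# Hypothetical training data: (speed, aggression) readings per robot.
train_x = [(0.2, 0.1), (0.3, 0.2), (0.9, 0.8), (0.8, 0.9)]
train_y = ["friendly", "friendly", "hostile", "hostile"]

centroids = fit_centroids(train_x, train_y)
print(classify(centroids, (0.85, 0.7)))  # a fast, aggressive robot -> hostile
```

Swap the hand-labelled tuples for real sensor data and a learned feature extractor and you have the same shape every supervised system follows.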
Let’s talk science

Robotics covers so many topics; it is not only software but hardware, kinetics, etc. Most of these I can’t even grasp, and they won’t be discussed here. This discussion revolves around what Artificial Intelligence actually is. What is the science that makes machines think like people, think rationally, and act like people act, rationally? What does it mean to build an artificial intelligence? There are a couple of schools of thought that go into this.
One school of thought says that what we should really be doing is building machines that think like people. Intelligence is about thinking, and this is the artificial kind; the natural intelligence, I guess, is us. So we want to build machines that somehow go through the thinking processes that people do. There actually is real science behind that, but it isn’t really Artificial Intelligence anymore; it is some mix of cognitive science and computational neuroscience, really trying to understand the brain. Another school of thought holds that we should be building machines that act like people. Who cares how they think? They can think in some strange, alien, silicon way, but the action, the behaviour, has to be like what we know from people. Which brings me to the conclusion: our requirement on what machines do is that they achieve their goals optimally, in a rational manner. What does rational mean? Rational means I make a level-headed decision, I don’t get angry. Um... Skynet got a little angry. So building machines that don’t get angry should be on our TODO list. Rational also has a very technical meaning: it means that you maximally achieve your pre-defined goals. The input to an AI is a goal, and rationality means you achieve it in the best possible way. For rationality, only what you do matters, not the thought process you go through to get there. There are different approaches to this challenge as well. So what about intelligence? I haven’t really touched this topic, because intelligence is a tricky field with obstacles. The philosophers are still pretty undecided on how to resolve this problem. When they get back to us on what intelligence is, we’ll just respond: that’s great, but we’re working on rationality for now. So the question really is, how do I figure out which action is best?
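That technical sense of rationality, maximally achieving a pre-defined goal, fits in a few lines of code. The actions, outcomes, and goal function below are all made up for illustration; this is a sketch of the definition, not a definitive agent.

```python
# Rationality sketch: the "goal" is just a scoring function over outcomes,
# and a rational agent picks the action that maximizes it.
# Actions and their outcomes here are invented for illustration.

def rational_action(actions, outcome_of, goal_score):
    """Return the action whose (deterministic) outcome best meets the goal."""
    return max(actions, key=lambda a: goal_score(outcome_of(a)))

# Hypothetical world: each action gains the robot some distance toward a target.
outcomes = {"wait": 0, "walk": 3, "sprint": 5, "sprint_into_wall": -2}

def goal(distance_gained):
    return distance_gained  # the pre-defined goal: cover ground

best = rational_action(list(outcomes), outcomes.get, goal)
print(best)  # -> sprint
```

Note that nothing in the function cares *how* the scores were produced, which is exactly the point: rationality judges the action, not the thought process.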
That has to do with the consequences of that action, the context of that action, and whether there are adversaries. What does it mean to have a function that describes my goals? And what about expectations? Life is full of risks. It has to do with doing the right thing in kind of the right way, correct? So what’s up with the brain? I mean, we’re engineers, right? We want to build hard-wired, self-automated coolers chatting with the customers at the store. Some kind of C-3PO vending machine for convenience stores, with a high IQ. So wouldn’t we be silly to disregard the only intelligent prototype we have, the brain? And you know, human minds are not perfect decision-makers, but by and large they are not too bad either. But there is something we are lacking, and that is a fact: we aren’t very rational in a lot of cases. Or maybe we are rational, if you think about our goals in a different way. Brains don’t appear to be modular, and we really don’t have a very clear understanding of how they work, which means they are hard to reverse-engineer. Let me finish this part of the article with this: in order to make educated decisions, there are two ways. One is to have experience of past events. The other is to decide what to do by simulating a possible outcome. Back to the game… Mack seems perfectly capable of making sense of a situation in order to proceed, using statistics. In short, some of his sophisticated software packages enable him to classify and distinguish between good and bad. Like C-3PO. C-3PO is the protocol droid for human-cyborg relations in Star Wars. He is basically Google Translate with anxiety. However, Mack seems to mimic emotions and affection for humans, too. And the creators did a good job, so that we perceive him as lovable and less like a tin can. When we say that humans exhibit intelligence, we are not referring to their ability to recognise concepts, perceive objects, or execute complex motor skills, which they share with other animals like dogs and cats. Rather, we mean that they have the capacity to engage in multi-step reasoning, to understand the meaning of natural language, to design innovative artefacts, to generate novel plans that achieve goals, and even to reason about their own reasoning. Those types of activities are a multi-sensor challenge and are expected to be supervised in most cases. We perceive each other through language and vision. A robot can do that, too, but only if the ones and zeros are perfectly aligned.
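The second of those two ways of making an educated decision, simulating possible outcomes, can be sketched as follows. The toy world model, its actions and payoffs, are assumptions invented for illustration; the point is only how averaging simulated outcomes handles risk.

```python
import random

# Decision-by-simulation sketch: with no past experience to draw on, roll a
# world model forward many times per action and compare average payoffs.
# The world model below is a pure invention for illustration.

def simulate(action, rng):
    """A toy stochastic world model: the risky action pays more but can fail."""
    if action == "safe_path":
        return 1.0
    # risky shortcut: big payoff 60% of the time, a crash otherwise
    return 3.0 if rng.random() < 0.6 else -4.0

def best_by_expectation(actions, trials=10_000, seed=0):
    """Pick the action with the highest average simulated payoff."""
    rng = random.Random(seed)
    def expected(a):
        return sum(simulate(a, rng) for _ in range(trials)) / trials
    return max(actions, key=expected)

print(best_by_expectation(["safe_path", "risky_shortcut"]))
```

Here the shortcut's expected payoff (0.6 × 3.0 − 0.4 × 4.0 = 0.2) loses to the safe path's guaranteed 1.0, so the simulating agent declines the gamble even though the shortcut sometimes pays triple.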
You probably have a lot of this technology closer than you think; one example is the face recognition built into your cameras. Segmenting scenes into pieces, figuring out for a given image what it means, what’s going on. Actually, it turns out that in addition to being able to do a bunch of cool things in vision with the image, one realization scientists made in cases like autonomous driving was that they don’t have to use the tools humans use. They spent a long time in vision just using one or two cameras slightly apart. But then the Xbox Kinect came along and opened a whole other door for scene segmentation, object recognition and image classification, with sensors typically not found on a human body. Looking around, detecting outlines, actually doing detection. Identifying what the objects are, figuring out what the target is. In the more specific case of robots, image classification may be performed using supervised, unsupervised or semi-supervised learning techniques. Which would it be in Mack’s case? The only reasonable explanation is that all of his experience is stored in his blue and shiny core. He would need to reason at lightning speed and set all his mechanics in gear to move left, right, up, or down. Plus, he would need to store positive events simultaneously. Apparently, I could go on and on about vision. But this field is just a subfield of perception. A very important aspect of a robot moving from one place to another is not just being able to avoid obstacles; it needs to be able to communicate with its extremities. We humans, as babies, need months to learn how to apply force and balance until we master all sorts of obstacles. You have to get the individual robot to simply walk along, find where the target is, and do the vision and the calibration. All of those parts are just building blocks, and each is hard in its details, not to mention in the grand scheme of things. For instance: calibrate your position in unknown territory and rethink your current strategy. The creators of ReCore go even one step further: they enable each core to be implanted into an unknown host skeleton. To move properly requires an understanding of the basic physical properties of the body. For instance, flying requires different movements than crawling around like a spider robot.
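Going back to the supervised option mentioned in the vision discussion above, here is about the smallest image classifier one can write: a 1-nearest-neighbour match over raw pixels of tiny 3x3 "images". The images and labels are invented; real systems learn features instead of comparing raw pixels, but the supervised shape is the same.

```python
# A supervised image-classification sketch: 1-nearest-neighbour over the raw
# pixels of tiny 3x3 binary "images". All data is invented for illustration.

CROSS = [0, 1, 0,
         1, 1, 1,
         0, 1, 0]
BLOCK = [1, 1, 1,
         1, 1, 1,
         1, 1, 1]

train = [(CROSS, "cross"), (BLOCK, "block")]  # labelled examples

def classify_image(pixels):
    """Return the label of the training image closest in pixel space."""
    def dist(img):
        return sum((a - b) ** 2 for a, b in zip(img, pixels))
    return min(train, key=lambda example: dist(example[0]))[1]

noisy_cross = [0, 1, 0,
               1, 1, 1,
               0, 0, 0]   # one corrupted pixel
print(classify_image(noisy_cross))  # -> cross
```

Even with a corrupted pixel the noisy input is still nearer to the cross than to the block, which is the whole trick: classification by similarity to labelled experience.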
Let’s talk walking. Everyone in the movie The Matrix was able to upload new knowledge to the system, an exercise that fast-forwarded through lots and lots of hours of trial and error. There seems to be no physical connection between ReCore’s robots and a Matrix-like Nebuchadnezzar, though. The “real” world is different. Current robots have specific capabilities: a robot receives sensory input and acts on it, those actions go back into the environment, and whatever reaction comes back feeds into the system again.
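That sense-act-react loop can be written down as a minimal program. The one-dimensional corridor "environment" and the reactive policy below are assumptions for illustration only; they just show how the action changes what is sensed next.

```python
# The sense -> act -> environment feedback loop as a minimal sketch.
# The 1-D corridor world and the policy are invented for illustration.

def environment_step(position, action):
    """The world reacts: moving forward changes the robot's position."""
    return position + (1 if action == "forward" else 0)

def policy(distance_to_wall):
    """A purely reactive controller: walk until the wall is close."""
    return "forward" if distance_to_wall > 1 else "stop"

position, wall = 0, 5
trace = []
for _ in range(10):
    percept = wall - position                       # sense
    action = policy(percept)                        # decide
    trace.append(action)
    position = environment_step(position, action)   # act back on the world

print(position, trace[-1])  # halts one step short of the wall: 4 stop
```

Notice the robot never "knows" the corridor's layout; the environment's reaction to each step is the only feedback it gets, which is exactly the loop described above.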
To enable this processing, you would want a robot to function within the system, the system being the robot. That robot has no idea whether it is succeeding, so that’s out the door. That means the system needs some sort of routine for expectations. You always have only a limited set of information on which you are supposed to act. Each and every method requires a different set of skills and Artificial Intelligence techniques. That means a system like Mack needs to be able to solve each one of those problems, choosing the correct technique by reflecting on what types of sensory inputs are available. And how do we know everything is perfectly observed? So let’s look at this from Mack’s perspective. As soon as you transfer the core into another robot body, you don’t know anything about its morphology. You know that there are some knobs and controllers, possibly some actuators. What do you do when all you know is... I don’t know anything? Imagine yourself sitting in a dark room trying to understand what you have gotten yourself into. Cornell University professor Hod Lipson demonstrates how a robot can teach itself to walk without any knowledge of its form and function. "Within a relatively small number of these babbling actions, it will figure out what it looks like," Lipson says. He adds that eventually "it can figure out how to move.” The system basically starts by moving randomly. This data is then matched against typical robot models. Making that core idea work means storing intrinsic models that get matched with the core’s experience. Combining and integrating cores means being able to communicate between the different cores so that they complement each other. We’re talking internal and external transmissions just to do simple tasks. Simple for us. It takes a tremendous effort to get where ReCore is.
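A toy version of that babbling idea can be sketched under heavy assumptions: the unknown "body" is reduced to a hidden command-to-displacement table, and the self-model is nothing more than a per-command average of what each random command was observed to do. This is a cartoon of the approach Lipson describes, not how the actual research system works.

```python
import random

# Motor-babbling sketch: the core wakes up in an unknown body, fires random
# motor commands, records what each did, and averages the observations into
# a crude self-model. The "body" below is invented for illustration.

HIDDEN_BODY = {0: 2.0, 1: -1.0, 2: 0.5}  # unknown to the core: cmd -> movement

def babble(n_trials=300, noise=0.3, seed=42):
    """Fire random commands, observe noisy displacements, average per command."""
    rng = random.Random(seed)
    totals, counts = {}, {}
    for _ in range(n_trials):
        cmd = rng.choice(list(HIDDEN_BODY))
        observed = HIDDEN_BODY[cmd] + rng.uniform(-noise, noise)
        totals[cmd] = totals.get(cmd, 0.0) + observed
        counts[cmd] = counts.get(cmd, 0) + 1
    return {c: totals[c] / counts[c] for c in totals}  # the learned self-model

model = babble()
# With a model of its new body, the core can finally act with purpose:
forward_cmd = max(model, key=model.get)
print(forward_cmd)  # the command the model believes moves it furthest
```

A few hundred "babbling actions" are enough here because the toy body is trivial; a real morphology needs far richer models, but the principle of act-randomly, observe, and fit survives.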
I am guessing this takes robotics research and expertise in motion planning and LIDAR/time-of-flight/vision processing, coupled with calibrating cameras, building models from point clouds, fusing data for localization, tracking objects, and getting a robot from here to there in the real world. We are talking about robot operation (both indoor and outdoor) and maintenance, whether that is debugging a sensor, chasing down a short circuit, or assembling and testing new electronics in a high-performance environment. I can’t wait to play the game and see what the creators came up with!