Microsoft has been at the forefront of the machine-learning craze, developing demos that are both neat and useful. About a year ago, the company released a tool that guessed your age, and just this past fall, it created another to help identify your emotions. The company has even developed a solution that can give a computer a sense of humor. There's seemingly no limit to where machine learning can take us.
Not everything in the machine-learning realm is perfect, though. Just last month, Microsoft had to pull its Twitter bot "Tay" from the internet after people managed to manipulate its "thoughts" for the worse. In no time, Tay became racist and sex-crazed, and we're pretty confident Microsoft didn't see that coming. Given the technology at play, no one could realistically blame Microsoft for Tay going off the rails, and in a way, it was a neat proof-of-concept.
Hot on the heels of the Tay debacle, Microsoft thought it'd attempt yet another bot. This one is simply called "Caption Bot", and it attempts to caption a photo you provide. Again, machine-learning tech is behind all of this, and at the moment, the results are not so hot. As some have discovered, the bot refuses to acknowledge who a person is, which apparently includes Hitler.
The caption this bot gave this particular image is quite appropriate; it even got the emotion right. Caption Bot isn't picky about the people it ignores, though: it doesn't seem to reveal the identity of anyone. After providing a picture of Barack Obama, the result is exactly the same, save for the detected emotion.
Without naming names, both descriptions above are accurate, although with further testing, it seems Caption Bot needs a lot more work. Check out what it thinks about the PlayStation 4:
If all suitcases looked like that, I doubt many of us would be traveling on a regular basis! But despite these stumbles, this is what machine learning is all about: computers are given datasets that they use to gradually improve their knowledge, so in a month, Caption Bot could very well give far more accurate answers. Let's hope so, because the results right now are laughable at best.