But let's not get too optimistic, for the very reasons that made AI notable in the first place. As AI learns, it gets more ambitious and more complex, and the more "success" and "wins" it racks up, the more famous it gets.
How does "success," defined in headlines like "How Humans Can Learn About Fancy," ever grow beyond "we made this bot" and "We made this bot" (this sentence was expanded on in the story's description):
WINE-ROBO!
TACO!
NEAT-ROBO!
MIT-ROBO!
It’s a beautiful language. Or, at the very least, it’s a language.
We used to call it BASIC. And then we slurped it up and walked away.
But it’s not just bots and AI that speak in logical, rhythmic, and/or autonomic terms.
We’re learning about wild humanoids, aliens, and even machines that learn from their own mistakes. They’re making some really smart guesses about what a computer is and where its thoughts are taking us.
It’s starting to happen now. See how many times robot proofreaders have produced AI sentences that sound like they came from a Dr. Seuss book? (It must be tricky for them to get it right the first time around.)
And now there is AI that is just as adept at catching grammatical mistakes as it is at guessing its way through English.
Just look at the many ways in which AI has learnt to sniff out and identify grammatical sentences. Word processors learnt to recognise handwritten expressions as grammatically correct even before they had a working algorithm for matching those words to the right nouns.
And then there are AIs that understand a sentence from beginning to end and are just as adept at matching it with a throwaway line like "that was just a T-shirt for a party."
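To make the idea concrete, here is a minimal, purely illustrative sketch of one way software can score a sentence for grammatical plausibility with a tiny bigram language model. The corpus, function names, and scoring scheme are all hypothetical, not how any particular word processor actually works.

```python
from collections import Counter

# Hypothetical miniature corpus; real systems train on millions of sentences.
CORPUS = [
    "the robot reads the sentence",
    "the robot writes a sentence",
    "a human reads the sentence",
]

def train_bigrams(sentences):
    """Count words and word pairs so we can estimate how expected each transition is."""
    unigrams, bigrams = Counter(), Counter()
    for s in sentences:
        words = ["<s>"] + s.split() + ["</s>"]
        unigrams.update(words[:-1])            # count context words only
        bigrams.update(zip(words, words[1:]))  # count adjacent word pairs
    return unigrams, bigrams

def plausibility(sentence, unigrams, bigrams, smoothing=1.0):
    """Geometric mean of smoothed bigram probabilities; higher looks more grammatical."""
    words = ["<s>"] + sentence.split() + ["</s>"]
    vocab = len(unigrams) + 1
    score = 1.0
    for prev, cur in zip(words, words[1:]):
        score *= (bigrams[(prev, cur)] + smoothing) / (unigrams[prev] + smoothing * vocab)
    return score ** (1.0 / (len(words) - 1))

unigrams, bigrams = train_bigrams(CORPUS)
print(plausibility("the robot reads the sentence", unigrams, bigrams))  # relatively high
print(plausibility("sentence the reads robot the", unigrams, bigrams))  # much lower
```

Real grammar checkers use far richer models, but the principle is the same: word orders the model has seen before score higher than word salad.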
But here’s an interesting development that will shock and delight even those unfamiliar with the field: not only could English-language AI become the most advanced AI in the world, it could also have the largest and longest reach of anything built this century.
It will have the largest and most ambitious lexicon ever assembled, and it will be the most accessible. It will, we are promised, recognise us and respond with kindness and understanding; it will make us happier, better listeners, better teachers and parents, better performers, even better doctors.
These are all truly remarkable developments. But they are picking up speed quickly, and the noise around them is often louder than the words themselves.
Recently, Twitter exploded when the first 140-character "sorry," complete with emoji, made the rounds. Now there are hundreds of thousands of tweets filled with strained and contradictory statements, many of them trying to honestly rationalise how these characters have evolved.
"The cartoon character is no longer a real character. The character is no longer a useful character in the world. There is no place for it in the world. There are no characters." "The cartoon character is no longer a useful character. The character is no longer a creative freedom. There is no place for him in the world. He belongs in a parallel universe with the other characters." "The cartoon character is no longer a threat. He is already threatening the existence of the cartoon characters." "The cartoon character is already a classic."
These are the kinds of characters that the major players in the Unicode ecosystem carry in their implementations, which between them cover many tens of thousands of characters.
Unicode has a huge and growing library of characters, which is why even a company like Google, which has contributed plenty of additions of its own, doesn’t necessarily render every one of them in its own fonts and products.
And even when a major player adds thousands more characters to its library, most everyday text still leans on plain alphanumeric characters, the far simpler set that most people are familiar with.
And the more you think about it, this is a problem that world-class developers are forced to address: how do we make sure our apps stay compatible with whatever comes after them? Unicode helps by sorting every character into a handful of broad general categories. Here's how they break down (see the sketch after this list):
Letter (L) - alphabetic characters from any script
Mark (M) - accents and other combining marks
Number (N) - digits and other numeric characters
Punctuation (P) - commas, brackets, dashes and the like
Symbol (S) - currency signs, maths operators, emoji and other symbols
Separator (Z) - spaces and other separators
Other (C) - control, formatting, and unassigned code points
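Here is that sketch: a minimal Python example using the standard library's unicodedata module to tally those general categories in an arbitrary sample string.

```python
import unicodedata
from collections import Counter

sample = "Hello, 世界! 123 ★ 😀"

# Tally the first letter of each character's general category
# (L=Letter, M=Mark, N=Number, P=Punctuation, S=Symbol, Z=Separator, C=Other).
tally = Counter(unicodedata.category(ch)[0] for ch in sample)
print(tally)  # Counter({'L': 7, 'Z': 4, 'N': 3, 'P': 2, 'S': 2})
```

Classifying by category rather than by a hard-coded list is one way an app stays compatible with characters that didn't exist when it was written.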