Chatbots are a technology with a lot of potential to help humans communicate better with our future robot overlords. I’ve never had much success communicating with these artificial intelligences, though: most of the time they simply share links to other resources on a website that I would’ve already looked at, when what I need is more specific help to solve my problem or request.
When used properly, though, chatbots are coming increasingly close to human communication. We may still have a long way to go before we can communicate with AI in the way that sci-fi makes us believe would be pretty cool, but progress is being made. A new patent filing in the US reveals that Microsoft is ready for the next step in chatbot realism, as it is creating AI programs that can mimic the voice and speech patterns of an actual person.
The technology would allow the company to gather information about a person from social media posts, images, voice data, and text messages to identify how they talk and what topics they would likely be interested in. This would then allow the chatbots to form conversations that match the individual as closely as possible.
The specific person may correspond to a past or present entity (or a version thereof), such as a friend, a relative, an acquaintance, a celebrity, a fictional character, a historical figure, a random entity, etc.
Through this new approach, Microsoft would be able to imitate the voice of a celebrity, a historical figure, or even a deceased loved one. It’s easy to see how this would appeal to many people who want to feel as if they are talking with specific people they’ve always dreamed of, even if it also comes across as a little creepy, like a Black Mirror episode in the making.
It’s impressive technology, and different from Amazon’s approach with its celebrity voices such as Samuel L. Jackson’s: most of those responses are pre-recorded, whereas this approach would see AI literally try to replicate a person’s speech and personality more specifically.
It’s similar to many of the deepfake technologies we’ve seen, which put people’s faces and voices into other videos. With that technology proving hugely problematic, it’s easy to see how this could go wrong too, with people using it to falsely claim that someone said certain things.
If Microsoft does actually intend to build this technology, hopefully it can find some way of circumventing those potential problems.
Last Updated: January 26, 2021