
How long until digital laws clamp down on AI bots and social media?

News 05/11/2017

When a bot says jump!

In the current political climate, nations around the world are cracking down on social media users, and politicians are calling for control. Some countries simply ban the service outright; in the West, the likelier outcome is that every social media account will soon have to be verified as human. Regular tests would try to trick and expose bots, while reporting and banning would become far more aggressive than anything Facebook or Twitter operates today.

While 99% of businesses, and virtually every company that provides bot services, play by the book, it only takes one fraud, con or serious mistake to create a bad experience that generates headlines.

With that in mind, it won’t be long before even the “freedom-loving” democracies look to legislate how chatbots and other automated interactions work. For now, the focus is on a Blade Runner-type rule where AI is not allowed to pretend to be human. This is simple enough for any legitimate business to implement (hi, I’m a chatbot), as the short sketch below illustrates, but others might be less forthcoming.
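
As a minimal sketch of how little such a disclosure rule asks of a legitimate bot, the Python snippet below prepends an “I’m a chatbot” notice to the first reply in a conversation. The `Chatbot` class, `DISCLOSURE` text and `reply` method are hypothetical names invented for this illustration; any real platform will expose its own API for this.

```python
class Chatbot:
    """Sketch of a bot that discloses its non-human nature up front.

    All names here (Chatbot, DISCLOSURE, reply) are hypothetical;
    a real platform would provide its own hooks for this behaviour.
    """

    DISCLOSURE = "Hi, I'm a chatbot, not a human agent."

    def __init__(self):
        self.first_message = True

    def reply(self, user_input: str) -> str:
        # Prepend the disclosure to the very first response, so the
        # user knows from the outset that they are talking to a bot.
        answer = self.generate_answer(user_input)
        if self.first_message:
            self.first_message = False
            return f"{self.DISCLOSURE} {answer}"
        return answer

    def generate_answer(self, user_input: str) -> str:
        # Placeholder for the bot's actual response logic.
        return "How can I help you today?"


bot = Chatbot()
print(bot.reply("Hello"))   # first reply carries the disclosure
print(bot.reply("Thanks"))  # later replies do not repeat it
```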

 

What happens when bots go wrong?

The big question for the IT and bot industry is what happens when an AI bot, due to poor scripting, gives bad advice (legal or medical), or provides information that is unclear and could cause an accident, an overdose or some other harm. Even when the advice itself is sound, a lack of context or verification could lead someone to misinterpret it, or the original information could come from an invalid or unverified source.

We see this today with GPS: the onus is on the driver to follow the rules of the road, not the voice of their digital guide. Similar rules could be reinforced or extended so that people are expected to double-check any AI-provided advice, especially in critical situations. Yet, a few times a year, some idiot still drives into a lake.

Hopefully, with some common sense, humanity will make it through the chatbot revolution. But what happens in 10 years, when a generation has grown up used to blindly following the advice of a digital doctor or a manufacturer’s support bot? At that point, being regularly led or advised by our new digital overlords may create new risks.

 

Do we need regulation?

Any enterprise these days has a huge playbook of rules when it comes to digital content, and smaller startups rapidly learn that following best practices is a good way to stay out of trouble.

Between the two, most businesses should be able to set up a free chatbot using a rules-based service, with millions of these rolling out around the world in 2018. When choosing a chatbot platform, check that, like SnatchBot for example, it “is built with robust administrative features and uses high-grade security that complies with all regulatory mandates, so data is secure and safe.” By using a reliable platform with a high level of integrity, companies can help set a good example, and most will ensure they play by any local rules or laws.
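
For readers unfamiliar with the term, a rules-based bot simply maps recognized user inputs to canned responses, with no machine learning involved. The sketch below is a minimal illustration in Python; the patterns, responses and function names are invented for this example and do not reflect any particular platform such as SnatchBot.

```python
import re

# Minimal rules-based chatbot: each rule is a regex pattern mapped to a
# canned response. All patterns and responses are invented examples.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"\bopening hours?\b", re.I), "We are open 9am-5pm, Monday to Friday."),
    (re.compile(r"\b(price|cost)\b", re.I), "Our plans start at $10/month."),
]

FALLBACK = "Sorry, I didn't understand that. A human agent will follow up."

def respond(user_input: str) -> str:
    """Return the first matching canned response, or a safe fallback."""
    for pattern, response in RULES:
        if pattern.search(user_input):
            return response
    return FALLBACK

print(respond("Hi there"))            # -> greeting
print(respond("What does it cost?"))  # -> pricing answer
print(respond("Tell me a joke"))      # -> fallback to a human
```

The key design point is the fallback: because a rules-based bot can only answer what its rules anticipate, anything unrecognized is handed off rather than guessed at, which is exactly what keeps this class of bot low-risk.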

The statistics will soon tell: as the number of bots grows and adoption increases, we’ll find out where the pitfalls lie, and users will become more aware of both the benefits and the risks.

AI-based chatbots pose a different type of challenge, as they can pull information from various sources, take user input and process it in many ways. So, when it comes to a future of AI bots, building Asimov-style rules of robotics into the code, to prevent a bot from giving the wrong information, should help.

Something like “a bot may not give advice that could lead to a human being or other living creature being injured or harmed, or encourage an illegal action” would be a high-level statement of the guidelines; a crude sketch of how such a rule might sit in front of a bot’s output follows below. Certain verticals such as banking, law and medicine will have their own specific rules grafted into the AI of any future bot.
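
As a purely hypothetical sketch of the idea, the filter below screens a bot’s draft reply against restricted-topic rules before it is sent, escalating to a human instead of answering. The categories, keywords and function names are all assumptions invented for this illustration; a production guardrail would need far more sophisticated classification than keyword matching.

```python
# Hypothetical Asimov-style output filter: screen a draft reply for
# restricted topics before it reaches the user. The keywords and
# categories below are invented placeholders, not a real safety taxonomy.
RESTRICTED_TOPICS = {
    "medical": ["dosage", "prescription", "diagnosis"],
    "legal": ["lawsuit", "liability", "contract dispute"],
    "dangerous": ["mix chemicals", "disable the alarm"],
}

HANDOFF = ("I'm not able to advise on that. "
           "Let me connect you with a qualified human.")

def guard_reply(draft_reply: str) -> str:
    """Block the draft reply if it touches a restricted topic.

    First law, loosely applied: the bot may not give advice that could
    lead to a person being harmed, so anything in a restricted category
    is escalated to a human rather than answered.
    """
    lowered = draft_reply.lower()
    for topic, keywords in RESTRICTED_TOPICS.items():
        if any(keyword in lowered for keyword in keywords):
            return HANDOFF  # escalate instead of answering
    return draft_reply

print(guard_reply("Your dosage should be doubled."))  # -> handoff message
print(guard_reply("Our store opens at 9am."))         # -> passes through
```

Vertical-specific rules, for banking, law or medicine, would then amount to swapping in stricter category lists and handoff behaviour for that domain.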