If news, polls and investment figures are any indication, Artificial Intelligence and Machine Learning will soon become an inherent part of everything we do in our daily lives.
Backing up the argument are a slew of innovations and breakthroughs that have brought the power and efficiency of AI into various fields including medicine, shopping, finance, news, fighting crime and more.
But the explosion of AI has also highlighted the fact that while machines will plug some of the holes human-led efforts leave behind, they will bring disruptive changes and give rise to new problems that can challenge the economic, legal and ethical fabric of our societies.
Here are four issues that Artificial Intelligence companies need to address as the technology evolves and invades even more domains.
Job automation and unemployment

Automation has been eating away at manufacturing jobs for decades. Huge leaps in AI have accelerated this process dramatically and propagated it to other domains previously thought to be the exclusive preserve of human intelligence.
From driving trucks to writing news and performing accounting tasks, AI algorithms are threatening middle class jobs like never before. They might set their eyes on other areas as well, such as replacing doctors, lawyers or even the president.
It’s also true that the AI revolution will create plenty of new data science, machine learning, engineering and IT positions to develop and maintain the systems and software that run those AI algorithms. The problem is that, for the most part, the people losing their jobs don’t have the skill sets to fill the vacant posts, creating an expanding vacuum of tech talent and a growing population of unemployed and disenchanted workers. Some tech leaders are even getting ready for the day the pitchforks come knocking at their doors.
To prevent things from spiraling out of control, the tech industry has a responsibility to help society adapt to the major shift overtaking the socio-economic landscape and transition smoothly toward a future where robots will occupy more and more jobs.
Teaching new tech skills to people who are losing, or might lose, their jobs to AI is part of the answer. In tandem, tech companies can employ rising trends such as cognitive computing and natural language processing and generation to break down the complexity of tasks and lower the bar of entry into tech jobs, making them accessible to more people.
In the long run, governments and corporations must consider initiatives such as Universal Basic Income (UBI), unconditional monthly or yearly payments to all citizens, as we slowly inch toward the day when all work will be carried out by robots.
Bias in Artificial Intelligence

As has been proven on several occasions in recent years, AI can be just as biased as humans, or even more so.
Machine Learning, the popular branch of AI that is behind face recognition algorithms, product suggestions, advertising engines, and much more, depends on data to train and hone its algorithms.
The problem is, if the data fed to these algorithms is unbalanced, the system will eventually adopt the covert and overt biases those data sets contain. And at present, the AI industry suffers from a lack of diversity that some label the “white guy problem”: the field is largely dominated by white males.
This is the reason why an AI-judged beauty contest turned out to award mostly white candidates, a name-ranking algorithm ended up favoring white-sounding names, and advertising algorithms preferred to show high-paying job ads to male visitors.
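A toy sketch can make the mechanism concrete (the data and model here are entirely hypothetical): even a trivially simple model that learns the most frequent outcome per group will faithfully reproduce whatever skew its training data contains, and a more sophisticated ML model fit to the same data fares no better.

```python
from collections import Counter

# Hypothetical, deliberately skewed historical data: 90% of past "hired"
# outcomes belong to group A. Any model fit to this will inherit the skew.
training_data = [
    ("A", "hired"), ("A", "hired"), ("A", "hired"), ("A", "rejected"),
    ("B", "rejected"), ("B", "rejected"), ("B", "hired"), ("B", "rejected"),
]

def train(data):
    """Learn the majority outcome per group -- a stand-in for any
    statistical model fit to unbalanced training data."""
    outcomes = {}
    for group, outcome in data:
        outcomes.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train(training_data)
print(model)  # {'A': 'hired', 'B': 'rejected'} -- the historical bias, learned
```

The model never sees the word "bias"; it simply compresses the data it was given, which is exactly why unbalanced data sets produce unbalanced systems.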
Another problem that caused much controversy in the past year was the “filter bubble” phenomenon seen on Facebook and other social media platforms, which tailor content to the biases and preferences of users, effectively shutting them off from other viewpoints and realities.
While for the moment many of these cases can be shrugged off as innocent mistakes and humorous flaws, major changes will be needed if AI is to be put in charge of more critical tasks, such as determining the fate of a defendant in court. Safeguards also have to be put in place to prevent any single organization or company from skewing the behavior of an ML algorithm in its favor by manipulating the data.
This can be achieved by promoting transparency and openness in algorithmic datasets. Shared data repositories that are not owned by any single entity and can be vetted and audited by independent bodies can help move toward this goal.
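As one hedged illustration of what such vetting could look like (the schema and the 30% threshold are assumptions for the example, not a standard): an independent auditor might flag groups that are underrepresented in a shared training set before it is approved for use.

```python
from collections import Counter

def audit_representation(labels, threshold=0.30):
    """Return the share of each group whose representation in the
    dataset falls below `threshold`. `labels` is a flat list of group
    identifiers (a hypothetical dataset schema)."""
    counts = Counter(labels)
    total = len(labels)
    return {g: n / total for g, n in counts.items() if n / total < threshold}

# A dataset where group B makes up only 20% of the samples gets flagged.
flagged = audit_representation(["A"] * 8 + ["B"] * 2)
print(flagged)  # {'B': 0.2}
```

Real audits would look at far more than raw counts, but even this kind of basic transparency is impossible when data sets are locked inside a single company.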
Responsibility and accountability

Who’s to blame when software or hardware malfunctions? Before AI, it was relatively easy to determine whether an incident was the result of the actions of a user, developer or manufacturer.
But in the era of AI-driven technologies, the lines are not as clear-cut.
ML algorithms figure out for themselves how to react to events, and while data gives them context, not even the developers of those algorithms can explain every single scenario and decision that their product makes.
This can become an issue when AI algorithms start making critical decisions such as when a self-driving car has to choose between the life of a passenger and a pedestrian.
Extrapolating from that, there are many other conceivable scenarios where determining culpability and accountability will become difficult, such as when an AI-driven drug infusion system or robotic surgery machine harms a patient.
When the boundaries of responsibility are blurred between the user, the developer, and the data trainer, every involved party can lay the blame on someone else. New regulations must therefore be put in place to anticipate and address the legal issues that will surround AI in the near future.
Privacy and misuse of data

AI and ML feed on reams of data, and companies that center their business on the technology will grow a penchant for collecting user data, with or without users’ consent, in order to make their services more targeted and efficient.
In the hunt for more and more data, companies may trek into uncharted territory and cross privacy boundaries. Such was the case of a retail store that found out about a teenage girl’s secret pregnancy, and the more recent case of UK National Health Service’s patient data sharing program with Google’s DeepMind, a move that was supposedly aimed at improving disease prediction.
There’s also the issue of bad actors, both governmental and non-governmental, who might put AI and ML to ill use. A very effective Russian face recognition app rolled out last year proved to be a potential tool for oppressive regimes seeking to identify and crack down on dissidents and protesters. Another ML algorithm proved effective at peeking behind masked and blurred images.
Other implementations of AI and ML are making it possible to impersonate people by imitating their handwriting, voice and conversation style, an unprecedented power that can come in handy in a number of dark scenarios.
Unless companies that develop and use AI technology regulate their information collection and sharing practices and take the necessary steps to anonymize and protect user data, they’ll end up causing more harm than good to users. The use and availability of the technology must also be reviewed and regulated in ways that prevent or minimize ill use.
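As a minimal sketch of one such protective step (the field names are hypothetical, and real anonymization also has to handle quasi-identifiers such as age and location): direct identifiers can be replaced with salted hashes before sharing, so records stay linkable for analysis without exposing raw identities.

```python
import hashlib
import secrets

# The salt is kept secret and is never shared alongside the data;
# without it, the hashes cannot be trivially reversed by dictionary attack.
SALT = secrets.token_bytes(16)

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a salted SHA-256 digest."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()

# Hypothetical record: the identifier is masked, the payload is untouched.
record = {"user_id": "alice@example.com", "diagnosis": "flu"}
shared = {**record, "user_id": pseudonymize(record["user_id"])}
print(shared["user_id"] != record["user_id"])  # True
```

Pseudonymization is only a first step: the point is that protecting users requires deliberate engineering choices, not just policy statements.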
Users should also become more sensible about what they share with companies or post on the Internet. We’re living in an era where privacy is becoming a commodity, and AI isn’t making it any better.
The future of Artificial Intelligence
There are benefits and dark sides to every disruptive technology, and AI is no exception to the rule. What is important is that we identify the challenges that lie before us and acknowledge our responsibility to make sure we can take full advantage of the benefits while minimizing the tradeoffs.
The robots are coming. Let’s make sure they come in peace.
This post is part of our contributor series. It is written and published independently of TNW.