The AI community is so enthralled by the science in this age of discovery that it has not yet stopped to examine the risks of who controls that power and what they do with it. It is easy to write off the dangers as the dystopian delusions of a science-fiction-obsessed Skynet or Westworld fanatic. While those scenarios may be far-fetched, there are more likely ones in which the absence of clear ethics and regulation in artificial intelligence creates tiered societies, powerful monopolies, unaccountable governments and abuses of power.
AI superhubs
The biggest multinational companies and governments are pursuing long-term strategies to control more AI talent than their competitors. AI hubs are forming – with China, the US and the UK the top destinations – that act as magnets for AI talent worldwide. While there is certainly an altruistic objective to foster the best minds and discover new applications, there is also a desire to control talent and limit competitors' access to it. None of this is secret: the country that hosts the capital of AI will be the next global superpower; China has a 50-year strategy to achieve exactly that; the UK government is inviting global power brands to open AI innovation centres, such as Samsung's recent centre at Cambridge University; and the UK, US and Canada all run tech visa programmes to attract the best and brightest minds from around the world.
In an AI world, the currency is data. As consumers and citizens, we trade our data for convenience and cheaper services. The likes of Facebook, Google, Amazon and Netflix process that data to make decisions and influence everything from what we will like and buy to the adverts we see and the way we might vote. What if everything we can access, view or read is controlled by a small global elite? And how can small companies or emerging markets compete if they are priced out of that data pool? It is astonishing that there are no ethical rules regulating this market.
The danger of monopolies
That is why the democratisation of access to AI is so important: the more organisations and entrepreneurs that can work with top-level AI intellects, the more we enable the positives of AI discovery to come to the fore and prevent monopolies from forming. That is what is so attractive about working for and with Brainpool.ai: its model gives companies of any size access to a pool of the world's top AI scientists, so they can find the right talent to develop their AI projects.
Dangers of misuse
The most dystopian example is a government using sentiment analysis to score its citizens on their value to society and determine what resources each citizen receives accordingly. It is a terrifying illustration of how AI and individual data could be misused. More prevalent, though, are examples of inherent bias, which reinforce the need for stronger AI ethics that force companies to address inconsistencies. There are well-known examples of facial recognition software that failed to recognise black faces – not through malevolent intent, but because the team that built it lacked the diversity, and therefore the awareness, to anticipate that the problem could arise and would need to be solved. Likewise, AI has been used in drug development, only for trials to reveal that the drugs do not work for certain ethnic groups. Again, it is not malice but implicit bias, and it would have been caught had the team signed up to a code of ethics and been required to run their approach through the rigours of regulation.
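To make the point concrete, here is a minimal, illustrative sketch in Python of the kind of disaggregated check a code of ethics might mandate: measuring a model's accuracy per demographic group rather than in aggregate, so that a failure on an under-represented group cannot hide behind a healthy overall score. The function and the numbers are hypothetical, not taken from any real system.

```python
# A minimal sketch of a disaggregated evaluation: report a model's
# accuracy separately for each demographic group, because a single
# aggregate score can mask poor performance on smaller groups.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return per-group accuracy so gaps between groups are visible."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical results from a recognition model. The aggregate accuracy
# is 6/8 = 75%, which looks acceptable in isolation.
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 1, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B"]

print(accuracy_by_group(y_true, y_pred, groups))
# {'A': 1.0, 'B': 0.333...} – the breakdown exposes what the average hides.
```

The design point is simply that the audit must be run at all: a team that never slices its results by group never sees the gap, which is exactly the failure mode the facial recognition and drug trial examples describe.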
Worse still, some software – such as "ethnicity detection" algorithms – promises to look at images of people and accurately determine their ethnic background, a capability that is perilously close to misuse from the start. When an AI algorithm is racist by design, the need becomes clear for regulation with teeth that can stamp such practices out before they take root.
This is also why we need more minorities in AI academia, and why we need to ease their path into industry. Again, Brainpool's model helps here: it can source a diverse pool of talent from around the world and place that talent in industry.
The artificial intelligence industry is at the splitting-the-atom stage, where the curiosity to discover whether it can be done far outweighs any thought as to whether it should be. But would those same atomic scientists, with the benefit of hindsight, do it the same way again?
Ethical standards are a necessity
We cannot tiptoe around the issue of ethics: we have enough recent examples of data misuse and of accidentally or deliberately biased AI algorithms to give credence to calls for standards and for the means to police them. AI scientists have a responsibility to consider how their discoveries, and the way they themselves work, are open to misuse. Just as medicine has an ethical duty to do no harm and internationally agreed regulations covering best practice, research and drug discovery, we need the equivalent for AI: standards that address bad practice and keep the focus on using AI as a force for good. Thankfully, we are still early enough to get this right.