Artificial Intelligence (AI) has slowly begun to affect people’s lives by facilitating everyday tasks and activities, from scheduling meetings and selecting activities to responding to emails and driving cars.

Artificial Superintelligence (ASI), the omnipotent AI that significantly outperforms humans in any area or task, is often presented as dangerous in the media. In contrast, an AI matching humans’ complete capabilities and ways of thinking is referred to as Artificial General Intelligence (AGI). However, despite the advances in technology and forecasts of 1000% investment growth over the coming years, both ASI and AGI remain more than ten years away from today’s reality.


The state of feasible AI today is known as Artificial Narrow Intelligence (ANI). ANI has seen an explosion of interest, activity and applications – enabled predominantly by exponential increases in available processing power and advances in gathering, storing and transferring data. ANI can only complete a narrow, clearly defined set of tasks, typically within one application area, and is unable to build on its capabilities to learn tasks outside this area.


No single AI solution yet works in and across media publishing out of the box. An ANI can, however, become as good as or better than any employee at a single, distinct process. Here at The Future Shapers we, like many media start-ups, are constantly looking to incorporate these technologies to make our processes and operations more efficient and effective. Even as a small entity, the reality for us is to keep a tight focus on what we can automate now and what will have to wait for a more plug-and-play solution.


Underlying the applications of ANI is a multitude of tools from data science – the most important being Machine Learning (ML). A Machine Learning algorithm learns, by itself, the transformation rules that turn a given input into a desired output.
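
As a rough illustration of what “learning a transformation rule” means, the toy sketch below (an illustrative Python example, not any production system) gives a program example input/output pairs and lets it discover the rule y = 2x + 1 on its own by repeatedly nudging its parameters:

```python
# Toy sketch: learn the rule y = 2x + 1 purely from example input/output pairs.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])     # given inputs
y = np.array([1.0, 3.0, 5.0, 7.0, 9.0])     # desired outputs

w, b = 0.0, 0.0                              # the unknown parameters of the rule
learning_rate = 0.01
for _ in range(5000):
    prediction = w * x + b                   # output created by the current rule
    error = prediction - y                   # deviation from the desired output
    w -= learning_rate * (error * x).mean()  # adjust the rule to shrink the error
    b -= learning_rate * error.mean()

print(f"learned rule: y = {w:.2f} * x + {b:.2f}")  # roughly y = 2.00 * x + 1.00
```

The repeated shrinking of the error is exactly the training process described next.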


The ML algorithm learns the transformation rules by iteratively reducing the deviation between the created output and the desired output – a process referred to as training. While different forms of training exist, the most common today is Supervised Learning. This method requires large amounts of labelled data to train the algorithm, meaning that parts of the data need to be tagged so the algorithm knows what to identify and how to use it within the designated process. The labelling is typically achieved by a combination of automation and human input.

There are also Unsupervised Learning and Reinforcement Learning. In Unsupervised Learning the algorithm is trained on unlabelled data and identifies concepts and derives outcomes entirely on its own. In Reinforcement Learning the algorithm also works on unlabelled data, but its output is rated, for example by a human, as correct or not, and over time it produces more correct answers as it improves. These algorithms are used in risk-and-reward scenarios such as gaming, trading or betting.
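
To make the distinction concrete, here is a minimal sketch contrasting the two most common setups, using scikit-learn and synthetic data (both are illustrative assumptions, not part of our stack): a supervised classifier trained on tagged examples, and an unsupervised clustering algorithm given the same data with the tags removed.

```python
# Supervised vs unsupervised learning on toy data (assumes scikit-learn is installed).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                     # 200 examples, 2 features each

# Supervised: every example carries a label (the "tag") giving the desired output.
labels = (X[:, 0] + X[:, 1] > 0).astype(int)
classifier = LogisticRegression().fit(X, labels)
print("supervised prediction:", classifier.predict([[1.0, 1.0]]))

# Unsupervised: the same data with no labels; the algorithm finds structure on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit(X)
print("unsupervised cluster assignments:", clusters.labels_[:10])
```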


Here at the Future Shaping Media Company we are looking to use these technologies and approaches to solve the problem of financial sustainability for public interest news and to rebalance power away from the social media and search giants. We believe that giving publishers the ability to determine which individual articles to charge for, and at what price, will be an increasingly powerful solution for the public interest news ecosystem, including publishers, readers and journalists.


We are looking to use ML algorithms to enable editors to determine, in an automated way, what they should charge each individual reader, seamlessly supporting a pay-per-read model.


Our strategic approach has echoes of the work of quantitative financial trading houses, as we are looking to identify the relationship between a single piece of content, its metadata, and a reader’s behaviours and loyalty to a publisher. We believe this pricing and content insight, together with the ability for publishers to protect and control their IP, coupled with the changing legislation for content copyright, presents a real opportunity to rebalance value back to producers of content.
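
Purely as an illustration of the kind of model this implies – not our actual system, and with every feature, value and price below a hypothetical placeholder – a regression model could be trained on content metadata and reader-behaviour signals to suggest a per-article, per-reader price:

```python
# Hypothetical sketch: suggest a per-article, per-reader price from content
# metadata and reader-behaviour features. All data below is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n = 1000

# Illustrative features: word count, a topic-depth score, the reader's visits in
# the last 30 days, and the share of this publisher's articles the reader finishes.
X = np.column_stack([
    rng.integers(300, 3000, n),   # word_count
    rng.random(n),                # topic_depth_score
    rng.integers(0, 60, n),       # recent_visits
    rng.random(n),                # completion_rate (a loyalty proxy)
])
# Illustrative target: a price (in pence) readers proved willing to pay.
price_paid = 20 + 0.01 * X[:, 0] * X[:, 3] + 30 * X[:, 1] + rng.normal(0, 5, n)

model = GradientBoostingRegressor().fit(X, price_paid)
new_case = [[1200, 0.8, 12, 0.65]]    # one article/reader combination
print(f"suggested price: {model.predict(new_case)[0]:.0f}p")
```

In practice the interesting work sits in the behavioural and loyalty features rather than in the choice of model.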


ML algorithms have recently proved remarkably good at tackling problems where writing a list of instructions won’t work. They can recognise objects in pictures, understand words as we speak them and translate from one language to another – something rules-based algorithms have always struggled with. The downside is that if you let a machine figure out the solution for itself, the route it takes to get there often won’t make a lot of sense to a human observer. The inner workings can be a mystery, even to the smartest of programmers.


In what AI and robotics experts call Moravec’s paradox, what machines are good at is where humans are weak, and vice versa. In 1988, the roboticist Hans Moravec wrote, “It is comparatively easy to make computers exhibit adult-level performance on intelligence tests, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”


Many of the most promising jobs today didn’t even exist twenty years ago, a trend that will continue and accelerate. Mobile app designer, 3D print engineer, drone pilot, social media manager, genetic counsellor – to name just a few of the careers that have appeared in recent years. And while experts will always be in demand, more intelligent machines are continually lowering the bar to creating with new technology. This means less training and retraining for those whose jobs are taken by robots, a virtuous cycle of freeing us from routine work and empowering us to use new technology productively.


Former chess champion and author Garry Kasparov has a positive outlook on how the future will play out: “Machines that replace physical labour have allowed us to focus more on what makes us human: our minds. Intelligent machines will continue that process, taking over the more menial aspects of cognition and elevating our mental lives toward creativity, curiosity, beauty, and joy. These are what truly make us human, not any particular activity or skill like swinging a hammer – or even playing chess.”


ML is one of the biggest drivers of AI technology at present. Algorithms within machine learning applications have been able to write code and play poker, and are being used in attempts to tackle cancer. Yet there is a bias problem. Using the popular GloVe algorithm, trained on around 840 billion words from the internet, three Princeton University academics have shown that AI applications replicate the stereotypes present in the human-generated data they learn from.


These prejudices related to both race and gender. Machine learning, the computer scientists write in a paper published in Science, “absorbs stereotyped biases” when learning language. The research used the Implicit Association Test (IAT) to determine where biases exist. Since it was developed in the 1990s, this type of test has been used in psychological studies to measure human traits.


The Princeton scientists adapted the test to work with the GloVe algorithm and used text from the internet to experiment with word pairing. In every IAT completed, biases from human language were learned by the machine learning system. The findings of the study did not come as a massive surprise, as machine learning systems are only able to interpret the data they are trained upon.
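
The core of that adaptation, published as the Word-Embedding Association Test, is simple enough to sketch: bias shows up as a difference in cosine similarity between groups of word vectors. The snippet below illustrates the idea with random placeholder vectors rather than the real GloVe embeddings used in the study.

```python
# Simplified sketch of an IAT-style association measure over word vectors.
# The vectors here are random placeholders; the study used GloVe embeddings.
import numpy as np

rng = np.random.default_rng(1)
vectors = {word: rng.normal(size=50)
           for word in ["flower", "insect", "pleasant", "unpleasant"]}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, attribute_a, attribute_b):
    # How much more strongly `word` leans towards attribute A than attribute B.
    return (cosine(vectors[word], vectors[attribute_a])
            - cosine(vectors[word], vectors[attribute_b]))

print("flower:", association("flower", "pleasant", "unpleasant"))
print("insect:", association("insect", "pleasant", "unpleasant"))
```

With real embeddings, target groups such as flowers and insects show systematically different associations with pleasant and unpleasant terms – the human-like bias the paper reports.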


In May 2016, ProPublica reported that software used across the US to predict future crimes and criminals was biased against African Americans, as the data used within it was not accurate. The Princeton work highlighting how prejudices and biases can easily be transferred is written up in the paper “Semantics derived automatically from language corpora contain human-like biases”. The authors noted, “The main thing about the machines is that we don’t want to have an AI system that we train-up on present culture or two year-old culture and then freeze that,” adding that as AI systems develop alongside culture, they should be continually updated and retrained on new data. The finding suggests that when building an intelligent system that learns enough about the properties of language to be able to understand and produce it, it will in the process also acquire historic cultural associations, some of which can be objectionable.


One of the most pressing issues regarding AI is liability. While the developer or manufacturer of the AI can be held liable for defects that harm users, decisions made by the AI that stem from its own interpretation of reality have no explicit liable party under current laws.

The spectre of automation doing people out of jobs, in either a blue- or white-collar capacity, has been on the horizon for decades, and it has already happened in many industries. In advanced economies with relatively high levels of employment, however, switching roles in a Homo adaptus manner may prove easier. This new class of worker understands that it needs to be in a state of constant evolution; rather than resisting technological developments, it seeks to use them in order to thrive. The World Economic Forum’s Future of Jobs report cited the most valuable skills of the 21st century as critical thinking, problem-solving and creativity.


The current COVID-19 pandemic may well have an impact on these trends as employment becomes increasingly uncertain and management decision-makers take the opportunity to accelerate investment in ML and AI solutions. According to the Pew Research Center, 86 percent of American internet users have taken steps to disguise their digital footprints, and 91 percent of them agree that consumers have lost control of the way their personal information is used by companies. Outside of the specifics of AI, the work of EU regulators such as Margrethe Vestager reminds us that surveillance ultimately isn’t a good business model, and history teaches us that bad business models eventually die.