AI - We See You

Empiric

Published 15/06/2020
What a difference a year makes. Or does it? 

This time last year at CogX, the world's biggest Global Leadership Summit and Festival of AI & Emerging Technology, people were sharing insights on the fast-evolving world of AI, discussing how its pace of growth was accelerating change in our world.

And accelerate it has, at a hurtling pace and far beyond our expectations, as we are thrown into the 'new normal'. From daily Zoom meeting participants rising from 10 million in December to 300 million in April, to the various track-and-trace apps that governments worldwide have quickly deployed, tech, and more specifically AI, has gained a stronger foothold than ever before.

But twelve months on, concerns remain the same, and two interlinked topics feature high on the agenda: privacy and bias.


Privacy has always been a rumbling concern under the surface, and never more so than now. Tech companies the world over are sprinting to build APIs and apps that trace those who have come into contact with a COVID-19 carrier, collecting data that users often perceive as intrusive and unnecessary. Governments in the UK, France, Norway, Singapore and Australia, to name just a few, have scrambled to launch them.

In a 2018 survey of more than 5,000 people in Australia, the US and the UK, professional services firm Genpact found that 71% of respondents did not want companies to use AI if it were to infringe on their privacy, including technologies created to improve their customer experience.

The recently launched contact-tracing app on the Isle of Wight corroborates these findings; what better way to test consumer sentiment than to roll out a data-collecting app that is touted to save lives?

For the software to achieve its full potential and have a material impact on reducing COVID-19 cases, epidemiologists advise that at least 60% of the population (80% in one study) needs to download and use the app. On the Isle of Wight, initial downloads did not reach the 60% needed, and further investigation showed that some people had downloaded the app more than once while others were visitors from mainland UK. Other countries have reported a similar lack of uptake.

The latest paper* to examine this issue, from four authors in Imperial College's Faculty of Medicine, found that of the 17.1% of NHS users who said they were unwilling to download an app-based contact-tracing tool, 67.2% cited privacy concerns.

Detlef Nauck, Head of AI & Data Science Research in BT Group's Applied Research Division, confirmed this narrative during a panel debate at CogX. He explained that the UK has chosen a centralised approach to collecting and processing the data: rather than keeping all the data on individual devices and letting the app alert its user when they have come into contact with a COVID-19 carrier, the UK has the NHS collect the data centrally. This is believed to be better for mapping outbreaks and gaining control of the virus.

But despite the assurances of the government, perhaps more people would download the app if they felt in control of their data.

"It would have been better to go with a decentralised app," said Detlef, "because then at least you can say there are no privacy concerns, as the data is on your device."


*Not yet peer reviewed. Belief of Previous COVID-19 Infection and Unclear Government Policy are Associated with Reduced Willingness to Participate in App-Based Contact Tracing: A UK-Wide Observational Study of 13,000 Patients [1]


It seems that it is not only governments facing challenges with public trust in AI; companies too have started to realise that without public uptake there is no quantifiable data.

From the embarrassing Microsoft AI picture mix-up of two Little Mix singers in an article on racism (you couldn't make it up), to driverless cars that do not 'see' black pedestrians, putting them at risk of being run over, it is clear that bias has been baked into tech. This exacerbates the lack of trust even further.

In the wake of global recognition of the Black Lives Matter movement following the murder of George Floyd in the USA, this bias is now front of mind. Only last week, on the same day that Jade Thirlwall's story of racism was illustrated with a picture of another woman of colour, bandmate Leigh-Anne Pinnock, IBM scrapped its facial recognition software over racial profiling concerns. A day later, Amazon imposed a one-year moratorium on police use of its facial recognition technology, and on June 11th Microsoft announced it would not sell facial recognition technology to the police.

When companies no longer trust their own AI products, why should the public? Especially those who were never included in the creation of these products in the first place.

The technology industry is still depressingly lacking in diversity. Software is coded by majority white, male engineers, and their creations regurgitate the same world experience and thinking into the real world, reinforcing sexism and racism in machine learning systems.

We know that moving away from teams of similar, like-minded people with the same unconscious biases delivers positive results time and time again. Take for example gender diversity, our largest gateway to numerous diverse groups, where the results speak volumes: higher financial performance, better team dynamics, higher productivity, increased innovation and better problem solving (Anita Borg Institute). And yet, here we still are.

We must do better. In the same way that the world turned to technology during the shutdown, AI solutions will be imperative in the recovery, and we simply cannot afford to continue with homogeneity of thinking and the repercussions of systems that learn and amplify bias. Bias breaks trust.

"The current pandemic has shown us more needs to be done to speed up the adoption of trusted AI around the world," said Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning at the World Economic Forum. In a joint international effort with the WEF, the UK released guidelines to help governments accelerate 'trusted' AI deployments with ethics in mind, via the Procurement in a Box toolkit.[2]

One step in the right direction. AI we see you.


By Marie-Clare Fenech

Marie-Clare heads up the NextTechGirls partnership programme. She spent 23 years in Technology Recruitment and Executive Search as a Board Director and Operations & Business Development Head, and is heavily invested in the future of female talent. She was educated in Italy and graduated with a Bachelor's Honours degree in Psychology in the UK.


Empiric is a dynamic technology and transformation recruitment agency specialising in data, digital, cloud and security. We supply technology and change recruitment services to businesses looking for both contract and permanent professionals.

Empiric is committed to changing the gender and diversity imbalance within the technology sector. In addition to Next Tech Girls, we proactively target skilled professionals from minority groups, which in turn can help you meet your own diversity commitments. Our active investment in the tech community allows us to engage with specific talent pools and deliver a shortlist of relevant and diverse candidates.
