With the world battling the COVID-19 outbreak, the interest in technologies that enable connectivity, productivity and online communication has exploded. Millions of professionals are now working from home, attending virtual meetings and collaborating online – work processes have been transformed.
“The end users must always weigh up risks and returns, but society as a whole does not need to accept greater risks when there are technology solutions”
The changes have touched every part of our lives, and within just a few weeks: schools and universities moved to online teaching, healthcare was delivered through telemedicine, exercise sessions went online – not to mention online grocery shopping. Suddenly, technology took center stage where it had previously been only a supplementary tool. Trust in technology, and its trustworthiness, became even more important.
One person who’s almost uniquely positioned to comment on the current landscape is Claire Vishik, CTO of GMT (Government, Markets, Trade) at Intel. Vishik’s job is to understand the connections between policy, regulations and standards across the world and the technology space; these connections are essential to ensuring users’ acceptance of the digital economy.
One of the main issues the coronavirus pandemic has highlighted is trust: not just users’ trust that technology will work as they expect, but organisations’ trust in the technology to let their employees work from home while lockdown measures have been in place.
Governments relied on tech companies to continue providing services and to support their communities under lockdown. Industry stepped in beyond adapting its own work environments to working-from-home models, providing support and new technology solutions in other areas, such as education. Intel, for instance, launched a $50 million COVID-19 response that contributes to areas such as online education and healthcare.
“It’s nothing short of amazing how much of the world has suddenly gone digital,” says Vishik. “Emergency reactions to the pandemic have demonstrated how much we depend on technology solutions. The fact that such a large proportion of the population of many countries started working from home, using technology for meetings and most of their activities, including grocery shopping and education, shows that there is general trust in technology.”
But the fight to control and suppress the virus has driven new developments that may require a reassessment of trust, privacy and security models. Applications such as contact tracing pose new questions for privacy advocates, medical professionals, technologists and regulators, and these experts have approached the answers differently in different countries and regions. In future, a unified approach will be needed so that pandemic control tools can be put in motion early.
Over the past three decades the industry has created reasonably harmonised principles of trust when it comes to the use of connected devices, especially when we talk about technical trust. In this case, standardised approaches, such as those developed by the Trusted Computing Group (TCG), have been defined and broadly adopted. Most countries have developed cyber security strategies to protect users and assets in cyberspace.
Europe pioneered regulatory and technical approaches to privacy and data protection, and privacy principles and technologies were broadly used during the COVID-19 pandemic. In many countries, pandemic-related portals use minimised or anonymised data for aggregated data sets, or anonymise data in queries. But broadly applicable technologies for contact tracing have yet to be created.
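The data-minimisation idea described above can be sketched in a few lines. The records, field names and suppression threshold below are purely illustrative – not taken from any real portal – and small-group suppression is just one common anonymisation technique among several:

```python
from collections import Counter

# Hypothetical record set: region and test result only, with direct
# identifiers already stripped (data minimisation).
records = [
    {"region": "North", "result": "positive"},
    {"region": "North", "result": "positive"},
    {"region": "North", "result": "negative"},
    {"region": "South", "result": "positive"},
]

def aggregate(records, threshold=3):
    """Aggregate counts per region, suppressing any group smaller than
    `threshold` so that individuals cannot be singled out."""
    counts = Counter(r["region"] for r in records)
    return {
        region: (n if n >= threshold else "<suppressed>")
        for region, n in counts.items()
    }

print(aggregate(records))  # {'North': 3, 'South': '<suppressed>'}
```

Publishing only aggregates above a threshold means a query can never reveal that a specific individual appears in the data set.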
As technologies continue to evolve, new privacy technologies will need to be developed. Intel has been working to extend privacy-enhancing technologies to new areas, such as artificial intelligence. Technologies that protect privacy for AI already exist, for example homomorphic encryption and federated machine learning, and they are already used in a variety of healthcare settings, especially to support research.
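Federated learning, one of the techniques mentioned, can be illustrated with a deliberately simplified sketch: each site fits a toy one-parameter linear model on its own data and shares only the model weight, which a central server averages. The sites, data and learning rate here are hypothetical, and real deployments use far richer models plus safeguards such as secure aggregation:

```python
# Minimal federated-averaging sketch: raw patient data never leaves a
# site; only locally updated model weights are shared and averaged.

def local_update(weights, data, lr=0.1):
    """One gradient step of a 1-D linear model y = w*x on local data."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, sites):
    """Each site computes an update locally; the server averages them."""
    local_ws = [local_update(global_w, data) for data in sites]
    return sum(local_ws) / len(local_ws)

# Hypothetical data held at two hospitals, both roughly following y = 2x.
sites = [
    [(1.0, 2.1), (2.0, 3.9)],
    [(1.0, 2.0), (3.0, 6.2)],
]

w = 0.0
for _ in range(50):
    w = federated_round(w, sites)
print(round(w, 2))  # converges near 2.0
```

The averaged model learns the shared trend across hospitals even though no site ever exposes its underlying records.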
“I think it's the responsibility of the technologists and governments to do everything in their power to minimise risks,” says Vishik. “The end users must always weigh up risks, and society as a whole needs to understand and mitigate risks in technology solutions.”
In times like the COVID-19 pandemic, private-public collaboration, which is already strong in the areas of security and privacy, has intensified further. Vishik says she hasn’t seen a significant negative escalation in this area during the coronavirus pandemic, and industry has continued to advise government on measures to protect data security and users’ privacy. “If anything, the direction was positive, because many immediately actionable problems needed urgent solutions,” she says. For governments, it is a matter of improving their understanding of how the technologies work, while companies must understand government use cases, analyse unintended consequences early, and have strong standards-based processes for a secure development lifecycle and privacy features in their products.
Although COVID-19 is a global problem, different countries and regions have developed their own strategies for security, privacy and data protection. In the European Union and the UK, for instance, privacy and data protection are principles-based, whereas in the US, federal regulations focus on the specific needs of market sectors such as healthcare or finance. While the approaches to privacy and data protection regulation differ, the strategic goals are in many cases aligned. We have a global economy, and we are using the same technologies. As a result, there is often already a good level of harmonisation via international standards, multi-stakeholder efforts, and research and development, which makes it possible for researchers and regulators to work together on many issues.
But what about after the pandemic has been controlled and life returns to something resembling normal? “It's clear to everyone that data has become a foundation of emerging technologies,” says Vishik. “I think there will be a lot of new use cases associated with new ways to process data, in areas such as artificial intelligence or smart cities. A new computing paradigm will likely emerge, with increased computing power in devices and platforms and at the edge of the network, almost unlimited bandwidth, and advances in robotics that will have far-reaching effects on how society operates.”
This proliferation of new technologies and use models will raise more questions around trustworthiness. Companies will have to demonstrate that their new systems, including artificial intelligence environments, can protect users’ privacy and use data ethically. AI will be expected to follow emerging principles of Responsible AI: inclusive, free of bias, and built on transparent and explainable methodologies.
*Other names and brands may be claimed as the property of others