Trust is essential for people and organizations to use technology with confidence. At Microsoft, we strive to earn the trust of our customers, employees, communities, and partners by committing to privacy, security, the responsible use of AI, and transparency.
At Microsoft Research, we take on this challenge by creating and using state-of-the-art tools and technologies that support a proactive, integrated approach to security across all layers of the digital estate.
Threats to cybersecurity are constant and they continue to grow, impacting organizations and individuals everywhere. Attack tools are readily available and well-funded adversaries now have the capability to cause unprecedented harm. These threats help explain why U.S. President Joe Biden issued an executive order in 2021 calling for cybersecurity improvements. Similarly, the European Union recently called for stronger protection of its information and communication technology (ICT) supply chains.
Against that backdrop, Microsoft Research is focused on what comes next in security and privacy. New and emerging computing frontiers, like the metaverse and web3, will require consistent advances in identity, transparency, and other security principles in order to learn from the past and unlock these technologies’ potential. Developments in quantum computing and advances in machine learning and artificial intelligence offer great potential to advance science and the human condition. Our research aims to ensure that future breakthroughs come with robust safety and privacy protections, even as they accelerate profound changes and new business opportunities.
At Microsoft Research, we pursue ambitious projects to improve the privacy and security of everyone on the planet. This is the first blog post in a series exploring the work we do in privacy, security and cryptography. In future installments, we will dive deeper into the research challenges we are addressing, and the opportunities we see.
Identity

While the internet was not originally built with an identity layer, digital identities have grown to become foundational elements of today’s web and impact people’s lives even beyond the digital world. Our research is aimed at modernizing digital identities and building more robust, usable, private and secure user-centric identity systems, putting each of us in control of our own digital identities.
This work includes researching cryptographic algorithms that enable privacy-preserving open-source user-centric identity systems. Such systems would let people present cryptographically signed electronic claims and selectively choose which information they wish to disclose, while preventing tracking of people between presentations of the claim. Our approach would preserve an individual’s privacy and work with existing web protocols to provide easy and safe access to a wide range of resources and activities.
Our research also includes investigating innovative ways for people to manage their identity secrets reliably and safely without having to provide any centralized party with full access to them. Success in this area will also require scalable and verifiable methods to distribute identity public keys, so people can know who exactly they are interacting with.
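As an illustration of selective disclosure, the sketch below has an issuer sign salted-hash commitments to each claim, so the holder can later reveal only the claims they choose. It is a simplified sketch: the HMAC stands in for a real public-key signature (so this verifier would need the issuer key), all names are illustrative, and, unlike the unlinkable credential schemes we study, these stable commitments could still be used to correlate a holder across presentations.

```python
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = b"issuer-signing-key"  # stand-in for a real signature key pair


def commit(claim: str, salt: bytes) -> str:
    """Salted hash commitment to a single claim string."""
    return hashlib.sha256(salt + claim.encode()).hexdigest()


def issue(claims: dict) -> dict:
    """Issuer signs commitments to every claim; the holder keeps the salts."""
    salts = {k: secrets.token_bytes(16) for k in claims}
    commitments = {k: commit(f"{k}={v}", salts[k]) for k, v in claims.items()}
    payload = json.dumps(commitments, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"commitments": commitments, "signature": signature,
            "salts": salts, "claims": claims}


def present(credential: dict, disclose: list) -> dict:
    """Holder reveals only the selected claims and their salts."""
    return {
        "commitments": credential["commitments"],
        "signature": credential["signature"],
        "disclosed": {k: (credential["claims"][k], credential["salts"][k])
                      for k in disclose},
    }


def verify_presentation(presentation: dict) -> bool:
    """Check the issuer signature, then each disclosed claim against its commitment."""
    payload = json.dumps(presentation["commitments"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, presentation["signature"]):
        return False
    return all(
        commit(f"{k}={v}", salt) == presentation["commitments"][k]
        for k, (v, salt) in presentation["disclosed"].items()
    )
```

A holder with claims for name, age, and city could present only the age claim; the verifier learns nothing about the undisclosed claims beyond their hashed commitments.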
Media provenance

Advances in graphics and machine learning algorithms have enabled the creation of easy-to-use tools for editing digital images and media. While useful in many ways, this technology has also enabled fraud and manipulation – or deepfakes. Early fakes were easy to spot, but current versions are becoming nearly impossible for machines or people to detect. The potential proliferation of fakes that are indistinguishable from reality undermines society’s trust in everything we see and hear.
Rather than trying to detect fakes, Microsoft Research has developed technology to determine the source of any digital media and whether it has been altered. We do this by adding digitally signed manifests to video, audio or images. The source of these media objects might be well-known news organizations, governments or even individuals using apps on mobile devices.
Since media creation, distribution, and consumption are complex and involve many industries, Microsoft has helped standards organizations specify how these signatures are added to media objects. We are also working with news organizations such as the BBC, The New York Times, and CBC to promote media provenance as a mitigation for misinformation on social media networks.
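At its core, a provenance manifest binds source information to a cryptographic hash of the media bytes and signs the result, so any later alteration is detectable. The sketch below illustrates the idea only; the HMAC stands in for the real public-key signature a publisher would use, and the key and source names are invented for the example.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-signing-key"  # stand-in for the publisher's private key


def make_manifest(media: bytes, source: str) -> dict:
    """Bind the source to a hash of the media bytes and sign both."""
    body = {"source": source, "sha256": hashlib.sha256(media).hexdigest()}
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return body


def check_manifest(media: bytes, manifest: dict) -> bool:
    """Verify the signature, then confirm the media bytes still match the hash."""
    body = {"source": manifest["source"], "sha256": manifest["sha256"]}
    payload = json.dumps(body, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest(),
        manifest["signature"],
    )
    return ok_sig and hashlib.sha256(media).hexdigest() == manifest["sha256"]
```

A single flipped byte in the media makes the hash check fail, so edits cannot masquerade as the original signed object.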
Hardware security foundations
To promote cyber-resilience, we are developing systems that can detect a cyberattack and safely shut down, protecting data and blocking the attacker. If compromised, these systems are designed to be repaired quickly and securely. They are built with simple hardware features that provide very high levels of protection for repair and recovery modules. To enable reliable detection of compromised systems, we are also developing storage features that can be used to protect security event logs, making it harder for attackers to cover their tracks.
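One standard way to make an event log tamper-evident is to hash-chain each record to its predecessor, so that editing any entry breaks every digest after it. A minimal sketch (real protection would also anchor the latest digest in protected storage, since an attacker who can rewrite the whole chain could otherwise recompute it):

```python
import hashlib

GENESIS = "0" * 64  # digest preceding the first entry


def append_entry(log: list, entry: str) -> None:
    """Chain each record to the previous digest so later edits are detectable."""
    prev = log[-1][1] if log else GENESIS
    digest = hashlib.sha256((prev + entry).encode()).hexdigest()
    log.append((entry, digest))


def log_intact(log: list) -> bool:
    """Recompute the chain from the start; any edited entry breaks it."""
    prev = GENESIS
    for entry, digest in log:
        if hashlib.sha256((prev + entry).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True
```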
Security analytics

Modern-day computers and networks are under constant attack by hackers of all kinds. In this seemingly never-ending cat-and-mouse contest, securing and defending today’s global systems is a multi-billion-dollar enterprise. Managing the massive quantities of security data collected is increasingly challenging, which creates an urgent need for disruptive innovation in security analytics.
We are investigating a transformer-based approach to modeling and analyzing large-scale security data. Applying and tuning such models is a novel field of study that could change the game for security analytics.
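The transformer models themselves are far too large to sketch here, but the underlying idea, scoring how surprising a sequence of security events is under a learned sequence model, can be illustrated with a toy bigram model as a stand-in. All event names below are invented for the example.

```python
import math
from collections import Counter, defaultdict


def train(sequences):
    """Count bigram transitions over event tokens (a toy sequence model)."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts


def surprise(counts, seq, vocab_size: int) -> float:
    """Average negative log-likelihood of the sequence; higher = more anomalous."""
    total = 0.0
    for a, b in zip(seq, seq[1:]):
        c = counts[a]
        # add-one smoothing keeps unseen transitions finite
        p = (c[b] + 1) / (sum(c.values()) + vocab_size)
        total -= math.log(p)
    return total / max(len(seq) - 1, 1)
```

A routine session scores low, while a sequence of transitions the model has never observed scores high and can be flagged for analyst review; the transformer approach scales this same likelihood-scoring idea to vastly richer event representations.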
Privacy-preserving machine learning
A privacy-preserving AI system should generalize so well that its behavior reveals no personal or sensitive details that may have been contained in the original data on which it was trained.
How close can we get to this ideal? Differential privacy can enable analysts to extract useful insights from datasets containing personal information while strengthening privacy protections. The method introduces “statistical noise” significant enough to prevent an AI model from compromising the privacy of any individual, yet small enough to preserve accurate, useful findings. Our recent results show that large language models can be particularly effective differentially private learners.
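A classic building block here is the Laplace mechanism: a counting query changes by at most 1 when any one person is added or removed (sensitivity 1), so adding Laplace noise with scale 1/ε yields ε-differential privacy for the released count. A minimal sketch:

```python
import math
import random


def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) by inverse-CDF."""
    u = rng.random() - 0.5
    while abs(u) == 0.5:          # avoid log(0) at the edge case u = -0.5
        u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def dp_count(records, predicate, epsilon: float, rng=None) -> float:
    """Release a noisy count: sensitivity 1, so Laplace(1/epsilon) suffices."""
    rng = rng or random.Random()
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)
```

Smaller ε means more noise and stronger privacy; the analyst tunes this trade-off against the accuracy the analysis needs.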
Another approach, federated learning, enables large models to be trained and fine-tuned on customers’ own devices to protect the privacy of their data, and to respect data boundaries and data-handling policies. At Microsoft Research, we are creating an orchestration infrastructure for developers to deploy cross-platform, cross-device federated learning solutions.
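The core aggregation step in such systems is federated averaging: each device trains locally and sends back only model weights, which the server combines weighted by each client's example count, so raw data never leaves the device. A minimal sketch, with plain lists standing in for real model tensors:

```python
def federated_average(client_updates):
    """Combine client weight vectors, weighted by local example counts.

    client_updates: list of (num_examples, weight_vector) pairs.
    Only weights cross the network; the training data stays on-device.
    """
    total = sum(n for n, _ in client_updates)
    dim = len(client_updates[0][1])
    avg = [0.0] * dim
    for n, weights in client_updates:
        for i, w in enumerate(weights):
            avg[i] += (n / total) * w
    return avg
```

Production deployments layer secure aggregation and differential privacy on top of this step so the server cannot inspect any single client's update.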
Protecting data in training or fine-tuning is just one piece of the puzzle. Whenever AI is used in a personalized context, it may unintentionally leak information about the target of the personalization. Therefore, we must be able to describe the threat model for a complete deployment of a system with AI components, rather than just a single part of it.
Read more about our work on these and other related topics in an earlier blog post.
Confidential computing

Confidential computing has emerged as a practical solution to securing compute workloads in cloud environments, even from malicious cloud administrators. Azure already offers confidential computing environments in multiple regions, leveraging Trusted Execution Environments (TEEs) available in multiple hardware platforms.
Imagine if all computation were taking place in TEEs, where services would be able to access sensitive data only after they had been attested to perform specific tasks. This is not practical today and much research remains to be done. For example, there are no formal standards to even describe what a TEE is, what kind of programming interface a TEE cloud should have, or how different TEEs should interact.
Additionally, it is important to continuously improve the security guarantees of TEEs. For instance, understanding which side-channel attacks are truly realistic and developing countermeasures remains a major topic for research. Furthermore, we need to continue researching designs for confidential databases, confidential ledgers and confidential storage. Finally, even if we build both confidential computing and storage environments, how can we establish trust in the code that we want to run? As a cloud provider, our customers expect us to work continuously on improving the security of our infrastructure and the services that run on it.
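The attestation step described above, releasing data only to code that the hardware has measured and vouched for, can be sketched as follows. This is an illustration only: the HMAC stands in for a hardware-fused signing key, and the function and key names are invented for the example.

```python
import hashlib
import hmac

HARDWARE_KEY = b"tee-attestation-key"  # stand-in for a hardware-fused signing key


def quote(code: bytes) -> dict:
    """The TEE measures the loaded code and signs the measurement."""
    measurement = hashlib.sha256(code).hexdigest()
    sig = hmac.new(HARDWARE_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": sig}


def release_secret(q: dict, expected_measurement: str, secret: bytes):
    """Relying party hands sensitive data only to attested, expected code."""
    valid = hmac.compare_digest(
        hmac.new(HARDWARE_KEY, q["measurement"].encode(), hashlib.sha256).hexdigest(),
        q["signature"],
    )
    if valid and q["measurement"] == expected_measurement:
        return secret
    return None
```

Code that differs by even one byte produces a different measurement, so the secret is never released to it.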
In the future, we can imagine Azure customers compiling their software for special hardware with memory tagging capabilities, eliminating problems like buffer overflows for good. To detect compromise, VM memory snapshots could be inspected and studied with AI-powered tools. In the worst case, system security could always be bootstrapped from a minimal hardware root of trust. At Microsoft Research, we are taking a step further and asking how we can build the cloud from the ground up, with security in mind.
Post-quantum cryptography

The advance of quantum computing presents many exciting potential opportunities. As a leader in both quantum computing development and cryptographic research, Microsoft has a responsibility to ensure that the groundbreaking innovations on the horizon don’t compromise classical (non-quantum) computing systems and information. Working across Microsoft, we are learning more about the weaknesses of classical cryptography and how to build new cryptographic systems strong enough to resist future attacks.
Our active participation in the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography projects has allowed Microsoft Research to examine deeply how the change to quantum-resistant algorithms will impact Microsoft services and Microsoft customers. With over seven years of work in this area, Microsoft Research’s leadership in quantum cryptography will help customers prepare for the upcoming change of cryptographic algorithms.
We’ve joined with the University of Waterloo and others to build a platform for experimenting with the newly proposed cryptographic systems and applying them to real-world protocols and scenarios. We’ve implemented real-world tests of post-quantum cryptography, to learn how these new systems will work at scale and how we can deploy them quickly to protect network tunnels. Our specialized hardware implementations and cryptanalysis provide feedback to the new cryptosystems, which improves their performance, making post-quantum cryptosystems smaller and stronger.
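One common pattern in such migration experiments is hybrid key exchange: derive the session key from both a classical and a post-quantum shared secret, so an attacker must break both exchanges to recover the key. The sketch below shows only the key-combination step, using an HKDF-style extract-then-expand derivation; the input secrets and context labels are illustrative placeholders, not a concrete protocol.

```python
import hashlib
import hmac


def hybrid_session_key(classical_secret: bytes, pq_secret: bytes,
                       context: bytes) -> bytes:
    """Derive one 32-byte session key from both shared secrets.

    HKDF-style: extract a pseudorandom key from the concatenated secrets,
    then expand it bound to the session context. The key stays safe as
    long as EITHER underlying exchange remains unbroken.
    """
    prk = hmac.new(b"hybrid-kdf-salt", classical_secret + pq_secret,
                   hashlib.sha256).digest()
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()
```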
Election integrity

Advances in cryptography are enabling end-to-end verifiable elections and risk-limiting audits for elections. Our open-source ElectionGuard project uses cryptography to confirm all votes have been correctly counted. Individual voters can see that their vote has been accurately recorded and anyone can check that all votes have been correctly tallied—yet individual ballots are kept secret. Risk-limiting audits use advanced statistical methods that can determine when an election audit has hit a pre-determined level of confidence with greater efficiency than traditional audits.
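The homomorphic tallying behind this can be illustrated with exponential ElGamal: each encrypted ballot carries its vote in the exponent, multiplying ciphertexts adds the votes, and only the final sum is ever decrypted, so individual ballots stay secret. The sketch below uses a deliberately tiny group; a real system uses groups thousands of bits wide and adds zero-knowledge proofs that each ballot encrypts 0 or 1.

```python
# Toy group: p = 23, with g = 2 generating a subgroup of prime order q = 11.
P, Q, G = 23, 11, 2


def keygen(secret: int):
    """Secret key s, public key g^s mod p."""
    return secret, pow(G, secret, P)


def encrypt(vote: int, public_key: int, nonce: int):
    """Encrypt a 0/1 vote; the vote rides in the exponent so votes add up."""
    return pow(G, nonce, P), (pow(G, vote, P) * pow(public_key, nonce, P)) % P


def add(c1, c2):
    """Multiplying ciphertexts componentwise adds the underlying votes."""
    return (c1[0] * c2[0]) % P, (c1[1] * c2[1]) % P


def decrypt_tally(cipher, secret: int) -> int:
    """Recover g^tally, then brute-force the small discrete log."""
    a, b = cipher
    g_tally = (b * pow(pow(a, secret, P), P - 2, P)) % P  # b * a^{-s}
    for t in range(Q):
        if pow(G, t, P) == g_tally:
            return t
    raise ValueError("tally exceeds group order")
```

Because decryption happens only on the combined ciphertext, the tally can be published and checked without ever revealing how any one person voted.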
The cryptographic tools that enable verifiable voting are Shamir secret sharing, threshold encryption, and additively homomorphic encryption. The math is interesting, and we will explore it in future blog posts, but there’s much more than math to ElectionGuard.
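Shamir secret sharing, the first of those tools, splits a decryption key among election guardians so that any quorum can reconstruct it while fewer shares reveal nothing. A minimal sketch over a prime field, with fixed polynomial coefficients passed in for reproducibility; a real guardian ceremony draws them at random and uses far larger parameters.

```python
P = 2 ** 61 - 1  # a Mersenne prime; the field for polynomial arithmetic


def make_shares(secret: int, threshold: int, n: int, coeffs):
    """Shares are points on a degree-(threshold-1) polynomial with f(0) = secret."""
    assert len(coeffs) == threshold - 1, "need threshold-1 random coefficients"
    poly = [secret % P] + [c % P for c in coeffs]

    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P

    return [(x, f(x)) for x in range(1, n + 1)]


def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any quorum."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any three of the five shares below reconstruct the key; two shares leave it information-theoretically hidden.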
Securing the future
Through our work, we aim to continue to earn customer trust, striving to ensure that Microsoft’s products and services and our customers’ information will remain safe and secure for years to come.
Forthcoming entries in this blog series will include more details on the areas covered in this post and more. Much of our work is open-source and published, so we will be highlighting our GitHub projects and other ways you can interact directly with our work.
Have a question or topic that you would like to see us address in a future post? Please contact us!