Intel Innovation 2023: Accelerating the Convergence of AI and Security

Intel presents a software-defined, silicon-accelerated approach built on a foundation of openness, choice, trust and security.

NEWS HIGHLIGHTS

  • General availability of a new attestation service that is part of Intel® Trust Authority, which offers a unified, independent assessment of Intel trusted execution environment (TEE) integrity, policy enforcement and audit records.
  • Collaborations with leading software vendors, including Red Hat, Canonical and SUSE, to provide Intel-optimized distributions of their enterprise software, ensuring developers have access to the hardware and software they need to scale performance.
  • Intel is joining the Linux Foundation’s newly formed Unified Acceleration Foundation and will contribute its oneAPI specification to help drive cross-platform development across multiple architectures.
  • Intel announced plans to develop an application-specific integrated circuit (ASIC) accelerator to reduce the performance overhead associated with fully homomorphic encryption (FHE), and will release the beta version of an encrypted computing software toolkit for developers later this year.

SAN JOSE, Calif., Sept. 20, 2023 – During the second day of Intel Innovation 2023, Intel Chief Technology Officer Greg Lavender offered a detailed look at how Intel’s developer-first, open ecosystem philosophy is working to ensure the opportunities of artificial intelligence (AI) are accessible to all.

Developers eager to harness AI face challenges that impede widespread deployment of solutions from client and edge to data center and cloud. Intel is committed to addressing these challenges with a broad software-defined, silicon-accelerated approach grounded in openness, choice, trust and security. By delivering tools that streamline the development of secure AI applications and reduce the investment required to maintain and scale those solutions, Intel is empowering developers to bring AI everywhere.

More: Intel Innovation 2023 (Press Kit)

“The developer community is the catalyst helping industries leverage AI to meet their diverse needs – both today and into the future,” Lavender said. “AI can and should be accessible to everyone to deploy responsibly. If developers are limited in their choice of hardware and software, the range of use cases for global-scale AI adoption will be constrained and likely limited in the societal value they are capable of delivering.”

Easing AI Deployment with Trust and Security

During the Innovation Day 2 keynote, Lavender highlighted Intel’s commitment to end-to-end security, including Intel® Transparent Supply Chain for verifying hardware and firmware integrity, and confidential computing to help protect sensitive data in memory. Today, Intel is expanding its platform security and data integrity protections with several new tools and services, including the general availability of a new attestation service.

This service is the first in a new portfolio of security software and services called Intel® Trust Authority. It offers a unified, independent assessment of trusted execution environment integrity, policy enforcement and audit records, and it can be used anywhere Intel confidential computing is deployed, including multi-cloud, hybrid, on-premises and edge environments. Intel Trust Authority will also become an integral capability for enabling confidential AI, helping ensure the trustworthiness of the confidential computing environments in which sensitive intellectual property (IP) and data are processed in machine-learning applications, particularly inferencing on current and future generations of Intel® Xeon® processors.
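
Conceptually, an attestation service appraises evidence reported from a TEE (code measurements, security version numbers, debug state) against a relying party's policy before trust is granted. The minimal C++ sketch below illustrates only that appraisal step; the report fields, policy values and appraise function are hypothetical assumptions for illustration, not the Intel Trust Authority API, which additionally verifies the cryptographic signature chain on the evidence.

```cpp
#include <iostream>
#include <set>
#include <string>

// Hypothetical, simplified view of the evidence carried in a TEE quote.
// A real attestation service first verifies the quote's signature chain back
// to the hardware vendor before trusting any of these fields.
struct AttestationReport {
    std::string enclave_measurement;  // hash of the code and data loaded in the TEE
    unsigned    tcb_svn;              // security version number of the platform TCB
    bool        debug_enabled;        // debug-mode TEEs should not receive production secrets
};

// Hypothetical relying-party policy.
struct Policy {
    std::set<std::string> allowed_measurements;  // known-good workload measurements
    unsigned              min_tcb_svn;           // minimum acceptable platform patch level
};

// The policy-enforcement step: decide whether the reported environment is trustworthy.
bool appraise(const AttestationReport& report, const Policy& policy) {
    if (report.debug_enabled) return false;
    if (report.tcb_svn < policy.min_tcb_svn) return false;
    return policy.allowed_measurements.count(report.enclave_measurement) > 0;
}

int main() {
    Policy policy{{"9f86d081884c7d65"}, 5};                  // example allow-list and minimum TCB
    AttestationReport report{"9f86d081884c7d65", 7, false};  // evidence from an already-verified quote
    std::cout << (appraise(report, policy) ? "trusted" : "rejected") << "\n";
}
```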

AI is an engine of innovation with use cases across every industry, from healthcare and finance to e-commerce and agriculture.

“Our AI software strategy is founded on open ecosystems and open accelerated computing to deliver AI everywhere,” said Lavender. “There are endless opportunities to scale innovation and we are creating a level playing field for AI developers.”

An Open Ecosystem Facilitates Choice with Optimized Performance

Organizations around the world are using AI to accelerate scientific discovery, transform business and improve consumer services. However, the practical application of AI solutions is limited by challenges that are difficult for businesses to overcome, ranging from a lack of in-house expertise and insufficient resources to properly manage the AI pipeline (including data preparation and modeling) to proprietary platforms that are expensive and time-consuming to maintain.

Intel is committed to driving an open ecosystem that allows for ease of deployment across multiple architectures. This includes being a founding member of the Linux Foundation’s Unified Acceleration Foundation (UXL). This cross-industry group is committed to delivering an open accelerator software ecosystem to simplify development of applications for cross-platform deployment. UXL is an evolution of the oneAPI initiative. Intel’s oneAPI programming model allows for code to be written once and deployed across multiple computing architectures, including CPUs, GPUs, FPGAs and accelerators. Intel will contribute its oneAPI specification to the UXL Foundation to help drive cross-platform development across architectures.
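
As a concrete illustration of the write-once idea, the minimal SYCL/DPC++ sketch below (assuming a oneAPI-compatible SYCL compiler such as icpx) expresses a vector addition once and lets the runtime dispatch it to whatever CPU, GPU or other accelerator is available:

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    // A default-constructed queue binds to whatever device the runtime selects:
    // a CPU, a GPU or another accelerator, with no source changes.
    sycl::queue q;
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {   // Buffers manage host<->device data movement automatically.
        sycl::buffer bufA(a), bufB(b), bufC(c);
        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
            h.parallel_for(sycl::range<1>{N}, [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
        });
    }   // Leaving the scope synchronizes results back into the host vectors.

    std::cout << "c[0] = " << c[0] << "\n";  // expected: 3
}
```

Built with a oneAPI toolchain (for example, icpx -fsycl), the same source can run on different device types by changing only the device selection, not the kernel code.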

Intel is also collaborating with leading software vendors Red Hat, Canonical and SUSE to provide Intel-optimized distributions of their enterprise software releases to help ensure optimized performance for the latest Intel architectures. Today at Innovation, Lavender was joined by Gunnar Hellekson, vice president and general manager for the Red Hat Enterprise Linux business, to announce an expanded collaboration that will see Intel contribute upstream support for the Red Hat Enterprise Linux (RHEL) ecosystem using CentOS Stream. Intel also continues to contribute to AI and machine-learning tools and frameworks, including PyTorch and TensorFlow.

To help developers scale performance quickly and easily, Intel Granulate is adding Auto Pilot for Kubernetes pod resource rightsizing. The capacity-optimization tool will offer automatic and continuous capacity management recommendations for Kubernetes users. This will enable them to reduce the investment required to meet their cost-performance metrics for containerized environments. Intel Granulate is also adding autonomous orchestration capabilities for Databricks workloads, which will deliver an average of 30% cost reduction and 23% processing time reduction with no code changes.1
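
Rightsizing tools of this kind generally compare a pod's requested CPU and memory with its observed usage and recommend tighter requests plus a safety margin. The sketch below illustrates that general idea only; the percentile, headroom value and function name are assumptions for illustration and do not represent Intel Granulate's algorithm or API.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

// Recommend a resource request from observed usage samples: take a high
// percentile of usage and add headroom. The percentile and headroom values
// here are illustrative assumptions, not a vendor's tuned defaults.
double recommend_request(std::vector<double> usage, double percentile = 0.95,
                         double headroom = 0.15) {
    if (usage.empty()) return 0.0;
    std::sort(usage.begin(), usage.end());
    size_t rank = static_cast<size_t>(std::ceil(percentile * usage.size()));
    if (rank == 0) rank = 1;                       // guard against a 0th-percentile request
    size_t idx = std::min(rank, usage.size()) - 1;
    return usage[idx] * (1.0 + headroom);
}

int main() {
    // Observed CPU usage of one pod, in cores, sampled over time.
    std::vector<double> cpu_samples{0.21, 0.18, 0.35, 0.30, 0.26, 0.41, 0.22, 0.28};
    double current_request = 1.0;  // the pod currently requests a full core
    double recommended = recommend_request(cpu_samples);
    std::cout << "current request:     " << current_request << " cores\n"
              << "recommended request: " << recommended << " cores\n";
}
```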

As the world relies more on AI to solve large and complex problems and deliver real business outcomes, there is a growing need to protect AI models, data and the platforms on which they run from tampering, manipulation and theft. Fully homomorphic encryption (FHE) allows computation to be performed directly on encrypted data, though practical implementations have been limited by computational complexity and overhead.

Intel said it plans to develop an application-specific integrated circuit (ASIC) accelerator to reduce the million-fold performance overhead associated with a software-only FHE approach. In addition, the company will launch the beta version of an encrypted computing software toolkit, which will enable researchers, developers and user communities to learn and experiment with FHE coding. This will come later this year as part of Intel® Developer Cloud, the general availability of which was announced yesterday, and will include a set of interoperable interfaces to develop FHE software, translation tools and a sample simulator of its hardware accelerator.
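
To make computing directly on encrypted data concrete, the toy example below implements the Paillier cryptosystem with deliberately tiny, insecure parameters: multiplying two ciphertexts yields an encryption of the sum of the underlying plaintexts, so an untrusted party can compute on values it never sees. Paillier is only additively homomorphic, so this is a teaching sketch of the principle rather than FHE, and it is unrelated to Intel's forthcoming toolkit.

```cpp
#include <cstdint>
#include <iostream>

// Toy Paillier cryptosystem with tiny, insecure parameters, purely to illustrate
// homomorphic evaluation: Enc(m1) * Enc(m2) mod n^2 decrypts to m1 + m2.
// Real deployments use moduli thousands of bits long; fully homomorphic schemes
// additionally support multiplication on ciphertexts.

using u64 = std::uint64_t;

u64 modpow(u64 base, u64 exp, u64 mod) {           // fast modular exponentiation
    u64 result = 1 % mod;
    base %= mod;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return result;
}

std::int64_t modinv(std::int64_t a, std::int64_t m) {  // modular inverse via extended Euclid
    std::int64_t old_r = a, r = m, old_s = 1, s = 0;
    while (r != 0) {
        std::int64_t quot = old_r / r;
        std::int64_t tmp = old_r - quot * r; old_r = r; r = tmp;
        tmp = old_s - quot * s; old_s = s; s = tmp;
    }
    return ((old_s % m) + m) % m;
}

int main() {
    // Key generation with toy primes p, q (completely insecure, for illustration only).
    const u64 p = 61, q = 53;
    const u64 n = p * q;                  // public modulus
    const u64 n2 = n * n;
    const u64 g = n + 1;                  // standard choice of generator
    const u64 lambda = 780;               // lcm(p-1, q-1) = lcm(60, 52)

    auto L = [&](u64 x) { return (x - 1) / n; };
    const u64 mu = modinv(static_cast<std::int64_t>(L(modpow(g, lambda, n2))),
                          static_cast<std::int64_t>(n));

    auto encrypt = [&](u64 m, u64 r) { return (modpow(g, m, n2) * modpow(r, n, n2)) % n2; };
    auto decrypt = [&](u64 c) { return (L(modpow(c, lambda, n2)) * mu) % n; };

    // Encrypt two values ("r" would normally be random; fixed here for reproducibility).
    u64 c1 = encrypt(42, 23);
    u64 c2 = encrypt(17, 31);

    // Homomorphic addition: the evaluator multiplies ciphertexts without holding any key.
    u64 c_sum = (c1 * c2) % n2;

    std::cout << "Dec(c1 * c2) = " << decrypt(c_sum) << "\n";   // prints 59 = 42 + 17
}
```

Fully homomorphic schemes add support for multiplication on ciphertexts along with noise management and bootstrapping, which is where the million-fold software-only overhead that Intel's planned ASIC targets comes from.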

Today’s news complements an action-packed first day of Intel Innovation 2023. Visit the Intel Newsroom to catch up on the announcements, which include news from Intel manufacturing, hardware, software and services.

Forward-Looking Statements

This release contains forward-looking statements, including with respect to Intel’s business plans and strategy, process and product roadmaps, and current and future technologies, as well as the anticipated benefits therefrom. Such statements involve many risks and uncertainties that could cause our actual results to differ materially from those expressed or implied, including: changes in demand for our products; changes in product mix; the complexity and fixed cost nature of our manufacturing operations; the high level of competition and rapid technological change in our industry; the significant upfront investments in R&D and our business, products, technologies, and manufacturing capabilities; vulnerability to new product development and manufacturing-related risks, including product defects or errata, particularly as we develop next generation products and implement next generation process technologies; risks associated with a highly complex global supply chain, including from disruptions, delays, trade tensions, or shortages; sales-related risks, including customer concentration and the use of distributors and other third parties; potential security vulnerabilities in our products; cybersecurity and privacy risks; investment and transaction risk; intellectual property risks and risks associated with litigation and regulatory proceedings; evolving regulatory and legal requirements across many jurisdictions; geopolitical and international trade conditions; our debt obligations; risks of large scale global operations; macroeconomic conditions; impacts of the COVID 19 or similar such pandemic; and other risks and uncertainties described in our earnings release dated July 27, 2023, our most recent Annual Report on Form 10-K and our other filings with the U.S. Securities and Exchange Commission. All information in this press release reflects Intel management views as of the date hereof unless an earlier date is specified. Intel does not undertake, and expressly disclaims any duty, to update such statements, whether as a result of new information, new developments, or otherwise, except to the extent that disclosure may be required by law.

1 Price and performance results may vary. For more information contact Granulate.io.