6G Wireless Networks Promise New Capabilities to Support Autonomous and Immersive Services

Highlights:

  • Initial work on 6G wireless network specifications will begin in 2025, and this NextG is expected to launch in 2030.

  • Future 6G networks will integrate new capabilities such as distributed computation, real-time learning, and sensing to support emerging autonomous and immersive services.

  • Intel Labs is developing key directions that show promise for 6G networks: RAN intelligence and automation, NextG wide area cloud, joint communication and sensing, distributed AI/ML, and network coding.

The fifth generation of wireless networks has delivered significant new capabilities. Current 5G specifications led to the development of an integrated framework for ultra-dense heterogeneous networks that delivers gigabits-per-second (Gbps) connectivity, ultra-reliable low-latency communication, and support for massive connectivity. In an abstract sense, 5G represents a paradigm shift from previous generations, moving from human-centric to machine-centric communications.

Initial work on 6G specifications will begin in 2025, and this NextG is expected to launch in 2030. As we move into the 6G era with more sophisticated devices and services, the focus will shift beyond the communication capabilities of wireless networks. We envision future networks that will integrate new capabilities to play an expanded role as a platform for distributed computation, real-time learning, and sensing to support emerging autonomous and immersive services.

This new direction requires a different system design approach that takes a cross-disciplinary view across communications, computing, and intelligence. Our task is to develop fundamental innovations that lie at the intersection of these disciplines. Intel Labs is developing key directions that show promise for 6G networks: RAN intelligence and automation, NextG wide area cloud, joint communication and sensing, distributed AI/ML, and network coding.

RAN Intelligence and Automation

Artificial intelligence and machine learning (AI/ML) enable a flexible implementation of a virtual Radio Access Network (RAN), transforming the RAN into an open, intelligent, and virtualized network.

Intel Labs is developing AI-based techniques for physical layer (PHY), MAC, and network layer algorithms. These solutions improve key performance indicators such as throughput, latency, coverage, and energy efficiency using Intel software and hardware.

Intel Labs developed Domain Knowledge Enhanced Neural Network (DKE-NN) techniques, making NNs smarter by injecting domain knowledge during training to solve PHY problems and design future-generation air interfaces.

Watch a demonstration of the DKE-NN proof of concept (PoC) for channel estimation.
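
To make the idea concrete, the sketch below shows one generic way domain knowledge can be injected into a channel-estimation network: the model refines a classical least-squares (LS) pilot estimate rather than learning from raw samples, training data is synthesized from a delay-sparse channel model, and a frequency-smoothness penalty is added to the loss. The shapes, hyperparameters, and model here are illustrative assumptions only, not the DKE-NN design.

```python
# Minimal sketch of domain-knowledge-aided NN channel estimation.
# All shapes, names, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

N_SC = 64          # number of subcarriers (assumed)
SNR_DB = 10.0      # training SNR (assumed)

def synth_batch(batch=256):
    """Synthesize frequency-selective channels and noisy LS estimates."""
    # Domain knowledge #1: channels are sparse in the delay domain,
    # so draw a few taps and FFT them to get the frequency response.
    taps = torch.zeros(batch, N_SC, dtype=torch.cfloat)
    taps[:, :4] = torch.randn(batch, 4, dtype=torch.cfloat) / 2.0
    h_true = torch.fft.fft(taps, dim=-1)
    noise = torch.randn_like(h_true) * (10 ** (-SNR_DB / 20))
    h_ls = h_true + noise            # LS pilot estimate = truth + noise
    return h_ls, h_true

def to_real(x):                      # complex -> stacked real/imag features
    return torch.cat([x.real, x.imag], dim=-1)

# Domain knowledge #2: the NN refines the classical LS estimate instead of
# learning the channel end to end from raw samples.
refiner = nn.Sequential(
    nn.Linear(2 * N_SC, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 2 * N_SC),
)
opt = torch.optim.Adam(refiner.parameters(), lr=1e-3)

for step in range(500):
    h_ls, h_true = synth_batch()
    h_hat = refiner(to_real(h_ls))
    mse = nn.functional.mse_loss(h_hat, to_real(h_true))
    # Domain knowledge #3: penalize roughness across adjacent subcarriers,
    # since physical channels vary smoothly in frequency.
    re, im = h_hat[:, :N_SC], h_hat[:, N_SC:]
    smooth = ((re[:, 1:] - re[:, :-1]).pow(2).mean()
              + (im[:, 1:] - im[:, :-1]).pow(2).mean())
    loss = mse + 0.01 * smooth
    opt.zero_grad()
    loss.backward()
    opt.step()

print("refined-estimate MSE:", mse.item())
```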

For the MAC and network layers, Intel Labs is developing scalable and energy-efficient AI algorithms for cloud-native RAN operation and service automation based on the O-RAN framework. Beyond algorithm design, Intel Labs is also building reference implementations and PoCs with ecosystem partners for AI-enabled RAN Intelligent Controller (RIC) xApps, leading the next wave of network transformation.
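
As a rough illustration of the closed-loop pattern an AI-enabled xApp follows, the sketch below reads per-cell KPIs and pushes a traffic-steering decision back to the network. The helper functions and the heuristic "model" are hypothetical placeholders, not the RIC's standardized E2/A1 interfaces or any Intel Labs xApp.

```python
# Illustrative closed-loop sketch of an AI-assisted RIC xApp. The helper
# functions and the policy knob are hypothetical placeholders for the
# interfaces defined by the O-RAN specifications.
import random
import time

def fetch_cell_kpis(cell_id):
    """Hypothetical stand-in for KPI collection from the RAN."""
    return {"prb_utilization": random.uniform(0.2, 0.95),
            "avg_latency_ms": random.uniform(5, 40)}

def push_steering_policy(cell_id, offload_fraction):
    """Hypothetical stand-in for a policy/control update toward the RAN."""
    print(f"cell {cell_id}: offload {offload_fraction:.0%} of new flows")

def decide_offload(kpis):
    """Toy 'model': offload more traffic as the cell approaches congestion.
    A real xApp would use a trained model instead of this heuristic."""
    load, latency = kpis["prb_utilization"], kpis["avg_latency_ms"]
    score = 0.7 * load + 0.3 * min(latency / 50.0, 1.0)
    return max(0.0, min(1.0, (score - 0.6) / 0.4))

if __name__ == "__main__":
    for _ in range(3):                 # a few control iterations
        for cell in ("cellA", "cellB"):
            kpis = fetch_cell_kpis(cell)
            push_steering_policy(cell, decide_offload(kpis))
        time.sleep(0.1)                # near-real-time loop period (toy value)
```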

Intel Labs supports the AI-native 6G vision and contributes to pre-6G standardization in the Next G Alliance, O-RAN, 3GPP, and other standards development organizations and fora. Intel Labs is leading the European Telecommunications Standards Institute’s (ETSI) AI coordination activity.

NextG Wide Area Cloud

One major driver for NextG is the convergence of mobile communications and cloud computing. The 6G Wide Area Cloud (WAC) is envisioned as a compute-plus-networking platform that will enable intelligent and ubiquitous computing, communication, and data services spanning regional and metro area data centers, cell sites, on-premises equipment, and client devices. In 6G WAC, ubiquitous, seamless computing will allow compute/AI workloads to be distributed across devices, networking nodes, edge servers, and data centers to achieve advanced performance for various applications.

Intel Labs is working on a distributed cloud framework combining the advantages of local (client and edge nodes) and centralized (data center) computing and communication.
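
A toy example of the kind of decision such a framework makes is sketched below: given an AI inference workload, pick the tier (device, edge node, or regional data center) with the lowest estimated end-to-end latency, accounting for both compute time and data transfer. The tiers, numbers, and cost model are illustrative assumptions, not the WAC design.

```python
# Minimal sketch of a latency-aware placement decision across a device,
# an edge node, and a regional data center. All values are illustrative.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    tflops: float        # available compute
    uplink_mbps: float   # bandwidth from the device to this tier
    rtt_ms: float        # network round-trip time to this tier

TIERS = [
    Tier("device",       0.5, uplink_mbps=float("inf"), rtt_ms=0.0),
    Tier("edge",        10.0, uplink_mbps=200,          rtt_ms=5.0),
    Tier("regional_dc", 200.0, uplink_mbps=100,         rtt_ms=30.0),
]

def end_to_end_ms(tier, workload_tflop, input_mbit):
    """Estimated latency = network RTT + input transfer + compute time."""
    compute_ms = workload_tflop / tier.tflops * 1000
    if tier.uplink_mbps == float("inf"):
        transfer_ms = 0.0            # data is already on the device
    else:
        transfer_ms = input_mbit / tier.uplink_mbps * 1000
    return tier.rtt_ms + transfer_ms + compute_ms

def place(workload_tflop, input_mbit):
    """Pick the tier with the lowest estimated end-to-end latency."""
    return min(TIERS, key=lambda t: end_to_end_ms(t, workload_tflop, input_mbit))

if __name__ == "__main__":
    # e.g., a 0.2-TFLOP inference on a 4-Mbit camera frame lands on the edge.
    best = place(workload_tflop=0.2, input_mbit=4)
    print("run on:", best.name)
```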

Intel Labs contributes to the Next G Alliance’s (NGA) 6G Technologies white paper on management and orchestration and to ETSI’s Multi-access Edge Computing (MEC) Industry Specification Group (ISG).

Joint Communication and Sensing

Joint communication and sensing (JCAS) for NextG refers to the integration of sensing capability into future communication networks with efficient reuse of spectrum and network infrastructure. Merging connectivity with advanced sensing capabilities transforms the future of wireless technology by enabling new services and applications and enhancing network performance via improved channel awareness. The envisioned sensing applications include intelligent transportation systems, environmental monitoring, intruder detection, digital twins for smart cities and factories, and more.

Intel Labs is developing key design solutions and algorithms to enable joint communication and sensing in NextG and unlock a transformative era of wireless innovation. Intel Labs provides solutions that enable sensing in NextG systems as an extension of the 5G New Radio (NR) user positioning framework, compatible with virtualized and open radio network architectures. Intel Labs also develops algorithms that improve achievable sensing performance in cellular systems and jointly improve network performance by leveraging sensing information, despite the intrinsic challenges of resource availability and resource sharing between sensing and communication services.
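
The sketch below illustrates the basic signal-processing idea behind OFDM-based sensing: a reflector at a given range appears as a delay, i.e., a linear phase ramp across subcarriers in the channel frequency response, and an IFFT converts that into a peak in the delay (range) profile. The numerology and the single static reflector are simplifying assumptions, not a NextG sensing design.

```python
# Minimal sketch of OFDM-based range sensing from the channel response.
import numpy as np

C = 3e8                      # speed of light, m/s
N_SC = 1024                  # subcarriers (assumed)
SCS = 120e3                  # subcarrier spacing, Hz (assumed FR2-like value)
TRUE_RANGE_M = 45.0

# Round-trip delay of the echo for a monostatic (co-located TX/RX) setup.
tau = 2 * TRUE_RANGE_M / C

# Channel frequency response across subcarriers: a single reflection appears
# as a linear phase ramp exp(-j*2*pi*f_k*tau), plus receiver noise.
k = np.arange(N_SC)
h = np.exp(-1j * 2 * np.pi * k * SCS * tau)
h += 0.1 * (np.random.randn(N_SC) + 1j * np.random.randn(N_SC))

# Delay profile via IFFT; the peak bin gives the estimated round-trip delay.
profile = np.abs(np.fft.ifft(h))
tau_hat = np.argmax(profile) / (N_SC * SCS)
range_hat = C * tau_hat / 2

print(f"estimated range: {range_hat:.1f} m (true {TRUE_RANGE_M} m)")
# Range resolution is c / (2 * N_SC * SCS), roughly 1.2 m for this bandwidth.
```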

Distributed AI/ML

Distributed AI/ML use cases are rapidly emerging as data is increasingly generated at the edge by smart Internet of Things (IoT) devices, including streaming data such as video surveillance, images, health-related measurements, and traffic/crowd statistics. A Gartner report suggests that 50% of the 175 zettabytes of data generated by 2025 will come from IoT devices and must be analyzed at the edge. Collaborative AI solutions that process data locally promise better accuracy through access to large and diverse datasets, while preserving privacy and avoiding the bandwidth and latency costs of moving data to the cloud for centralized learning.

The industry is now working to enable the 6G WAC, which will drive support for distributed AI/ML workloads pervasively across 5G/6G networks. 3GPP and O-RAN are already working to include support for distributed AI in their standards. For example, work on Federated Learning (FL) in 5G-Advanced standards is underway (3GPP TR 23.700-80 and TR 33.738), with momentum expected to accelerate as 6G standardization kicks off.

Intel Labs is developing solutions that address the unique challenges of learning locally from distributed data collected at the network edge. These challenges are distinct from centralized learning and arise from the wireless edge's dynamic and resource-constrained environment, as well as the heterogeneity in compute, communication, and data resources available at each collaborating device. All these factors can significantly affect learning performance in terms of overall accuracy, model fairness, and learning time. There is also an increased potential for adversarial attacks from rogue devices and for privacy leakage when ML models are shared between collaborating devices and edge/cloud servers. Intel Labs has developed several solutions that improve learning performance while addressing the data privacy of distributed AI/ML computations in resource-constrained settings (CFL-JSAC-21, JSAN-21, ICML-21, DP-CFL-DSLW-21, and FLSys-23). We also support an open-source Federated Learning library (OpenFL), which enables development of and experimentation with FL solutions. Our work on FLSys-23 specifically develops a continuous FL solution for a distributed autonomous driving application by extending and integrating the OpenFL tool with Intel’s autonomous driving simulator CARLA and wireless connectivity models.
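
For readers unfamiliar with the basic federated pattern behind these solutions, the sketch below shows plain federated averaging (FedAvg) for a linear model: each client trains locally on data that never leaves the device, and a server aggregates the resulting models weighted by sample count. It is a textbook illustration in NumPy, not the OpenFL API or the cited Intel Labs algorithms.

```python
# Minimal federated averaging (FedAvg) sketch for a linear regression model.
# It shows the basic pattern: local updates on private data, then a
# sample-weighted average at the server.
import numpy as np

rng = np.random.default_rng(0)
D = 10                                    # feature dimension (assumed)
w_true = rng.normal(size=D)

def make_client(n):
    """Each client holds a private dataset that never leaves the device."""
    X = rng.normal(size=(n, D))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

clients = [make_client(n) for n in (50, 200, 80)]   # heterogeneous data sizes

def local_update(w, X, y, lr=0.05, epochs=5):
    """A few epochs of local gradient descent on the client's own data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(D)
for rnd in range(20):                     # federated rounds
    local_ws, sizes = [], []
    for X, y in clients:
        local_ws.append(local_update(w_global.copy(), X, y))
        sizes.append(len(y))
    # Server aggregates: average weighted by each client's sample count.
    weights = np.array(sizes) / sum(sizes)
    w_global = sum(wt * w for wt, w in zip(weights, local_ws))

print("distance to true model:", np.linalg.norm(w_global - w_true))
```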

Intel Labs has also developed distributed/decentralized compute orchestration mechanisms that can orchestrate AI compute workloads on demand and optimally adapt to the dynamically changing computing environment at the edge to meet the required quality of service (DCN-GCOM-20, GC-2022, and WCNC-23).

Intel Labs is contributing to 3GPP standards to address issues related to distributed/federated learning in 5G-Advanced networks, as well as to several industry white papers (5G Americas) covering distributed AI/ML topics (5G-Edge-20, 5G-Int-Edge-21, and DCC-5G-22).

Finally, the National Science Foundation (NSF) and Intel Labs have funded the NSF/Intel Partnership on Machine Learning for Wireless Networking Systems (MLWiNS), a university research program that has a significant focus on advancing efficient computation of AI workloads on wireless edge networks, as well as on co-optimizing wireless networks with AI/ML workloads.

Network Coding

Next-generation wireless networks provide multiple independent data paths between transmit and receive nodes. Different forms of this infrastructural redundancy include simultaneous connections via multiple radio access technologies (multi-RAT), dual/multi-connectivity and carrier aggregation, integrated access and backhaul (IAB), and more. Network redundancy can provide an extra degree of freedom to achieve reliable, resilient, low-latency data communication over inherently unreliable wireless channels, for example, mmWave links that suffer from blockage. However, traditional techniques such as PHY layer channel coding, lower layer retransmissions (HARQ/ARQ), and packet duplication are either incapable of exploiting this opportunity or do so inefficiently.

Network coding (NC), such as linear coding at the packet level, is a good candidate to efficiently utilize multi-path infrastructure redundancy and supplement PHY layer channel coding techniques, further enhancing reliability with desirable latency. By proactively adding redundant encoded packets to the traffic flow and transmitting the protected packets over all data routes, the linear packet coding scheme can treat the lossy multi-route network as a single data pipe and efficiently use all of the aggregated bandwidth. In a prototype developed at Intel Labs, encoding/decoding latency is at the microsecond level, enabling Gbps-level throughput.
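
The sketch below illustrates packet-level random linear network coding in its simplest form, over GF(2): source packets are XOR-combined with random coefficients into a larger set of coded packets spread over the available routes, and the receiver decodes by Gaussian elimination once it has enough linearly independent packets, regardless of which routes they arrived on. Practical schemes typically use a larger field such as GF(256) so that random coefficients are almost surely decodable; the field choice and packet sizes here are illustrative, not the Intel Labs prototype.

```python
# Minimal sketch of packet-level random linear network coding over GF(2).
import numpy as np

rng = np.random.default_rng(1)
K, N, PKT_BYTES = 4, 7, 8                 # source pkts, coded pkts, payload size

source = rng.integers(0, 256, size=(K, PKT_BYTES), dtype=np.uint8)

# Encoding: each coded packet is the XOR of a random subset of source packets.
coeffs = rng.integers(0, 2, size=(N, K), dtype=np.uint8)
coded = np.zeros((N, PKT_BYTES), dtype=np.uint8)
for i in range(N):
    for j in range(K):
        if coeffs[i, j]:
            coded[i] ^= source[j]

# Suppose packets 1 and 4 are lost on their routes; the rest arrive.
received = [i for i in range(N) if i not in (1, 4)]

# Decoding: Gaussian elimination over GF(2) on the augmented system
# [coefficients | payloads] built from whatever packets arrived.
A = coeffs[received].astype(np.uint8)
B = coded[received].copy()
row = 0
for col in range(K):
    pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
    if pivot is None:
        continue                           # no pivot: need more coded packets
    A[[row, pivot]], B[[row, pivot]] = A[[pivot, row]], B[[pivot, row]]
    for r in range(len(A)):
        if r != row and A[r, col]:
            A[r] ^= A[row]
            B[r] ^= B[row]
    row += 1

if row == K:
    print("recovered all source packets:", np.array_equal(B[:K], source))
else:
    print("coefficients were rank deficient; request more coded packets")
```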