The soft hum of anticipation fills the station as you stand at the platform, ticket in hand, waiting for the arrival of the Cybersecurity Express. The air feels charged with excitement—an adventure packed with cutting-edge insights is just moments away. You glance at the station’s clock, watching the minutes tick by, each one bringing you closer to a journey where every stop reveals a new layer of the digital world’s most pressing issues. You can almost hear the train in the distance, the rumble of knowledge speeding toward you, ready to whisk you away into the heart of cybersecurity.
The train finally arrives with a smooth, whispering halt. The doors glide open, and you step aboard, taking in the atmosphere—the soft glow of monitors, the faint pulse of data traveling unseen. As you settle into your seat, a voice announces the first destination: “Next stop, vulnerability in widely-used software.” The digital landscape outside the window begins to blur with code and algorithms, as your journey unfolds into the deep, ever-evolving world of cybersecurity. Hold tight; your first article is just ahead, and it promises to reveal the latest threat lurking in the virtual shadows.
Sometimes reality can be stranger than sci-fi, or at least that is what some researchers have proven. They have uncovered a sophisticated side-channel attack, dubbed “PIXHELL,” that can exfiltrate sensitive data from air-gapped and audio-gapped systems without the need for speakers or any specialized audio hardware. Instead, this innovative attack leverages acoustic signals generated by the LCD screens themselves, turning pixel patterns into covert data transmission channels.
Developed by Dr. Mordechai Guri of Ben-Gurion University of the Negev, PIXHELL takes advantage of unintended acoustic emissions generated by the components of an LCD screen, such as capacitors and coils. These components can produce high-frequency noise, commonly referred to as coil whine, which the malware exploits to encode and transmit data. The signals are modulated through pixel patterns displayed on the screen, producing acoustic waves that can be captured by nearby devices such as smartphones or laptops equipped with microphones.
In a typical PIXHELL attack, the malware running on the compromised system encodes sensitive data, such as encryption keys or keystrokes, using modulation schemes like On-Off Keying (OOK), Frequency Shift Keying (FSK), or Amplitude Shift Keying (ASK). The modulated data is then transmitted as acoustic signals via the LCD screen’s components. The malware carefully controls the pixel patterns displayed on the screen to produce specific acoustic frequencies within the 0–22 kHz range, typically near the upper end of that band, where they are barely audible to humans but detectable by a microphone on a nearby rogue device.
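To make the modulation concrete, here is a minimal Python sketch of the OOK variant: each bit interval either displays a faint striped pattern (whose switching load on the panel’s power circuitry produces a tone) or a black frame (silence). The symbol duration, stripe spacing, and brightness value are illustrative assumptions on my part, not figures from the paper.

```python
import numpy as np

# Minimal OOK sketch in the spirit of PIXHELL. Symbol duration, stripe
# spacing, and brightness are illustrative assumptions, not paper figures.
BIT_MS = 50          # 50 ms per bit corresponds to ~20 bps
W, H = 1920, 1080    # assumed screen resolution

def frame_for_bit(bit: int, stripe_px: int = 2) -> np.ndarray:
    """Striped frame for a '1' (panel circuitry emits a tone), black for '0'."""
    frame = np.zeros((H, W), dtype=np.uint8)
    if bit:
        # Alternating dim rows; the rapid switching load on the panel's
        # power circuitry is what produces the faint acoustic tone.
        frame[::stripe_px * 2, :] = 8   # low brightness, near-invisible
    return frame

def encode(data: bytes):
    """Yield one frame per bit, most significant bit first (On-Off Keying)."""
    for byte in data:
        for i in range(7, -1, -1):
            yield frame_for_bit((byte >> i) & 1)

frames = list(encode(b"KEY"))
print(f"{len(frames)} frames, {len(frames) * BIT_MS / 1000:.1f} s at {BIT_MS} ms/bit")
```

Feeding frames like these to the display at a steady rate is all the “transmitter” needs; no audio stack is ever touched.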
The attack has been demonstrated to achieve data exfiltration at distances of up to 2 meters (6.5 feet) with a data rate of 20 bits per second (bps). While this may not be practical for transferring large files, it is sufficient for keylogging, stealing passwords, or transmitting small chunks of sensitive data, such as encryption keys. This makes the PIXHELL attack particularly dangerous in scenarios where even a small data leak could have catastrophic consequences.
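Some back-of-envelope arithmetic shows why 20 bps is enough for secrets but hopeless for bulk data (the payload sizes below are illustrative):

```python
# Rough exfiltration times at PIXHELL's demonstrated 20 bits per second.
RATE_BPS = 20
items = [
    ("16-character password", 16 * 8),
    ("4096-bit RSA private key", 4096),
    ("1 KB text file", 1024 * 8),
]
for name, bits in items:
    print(f"{name}: ~{bits / RATE_BPS:.0f} seconds")
# password: ~6 s, RSA key: ~205 s, 1 KB file: ~410 s
```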
PIXHELL’s ability to breach air-gapped systems, environments designed to be physically isolated from external networks, highlights just how far side-channel attacks have advanced. Air-gapped environments, often used in critical infrastructure or defense systems, are intended to be highly secure by eliminating any network connection. However, PIXHELL demonstrates that these systems remain vulnerable to physical side-channel exploits that take advantage of unintended emissions.
One of the most concerning aspects of PIXHELL is its stealth. The pixel patterns used to modulate data are low-brightness or invisible to the user, making the attack hard to detect. Additionally, the acoustic signals generated are outside the normal human auditory range, further increasing the difficulty of identifying the attack in real-time.
To defend against such attacks, several countermeasures are recommended. First, banning microphone-equipped devices, such as smartphones, in sensitive environments can mitigate the risk. Implementing background noise generation or jamming techniques can disrupt acoustic signals, reducing the signal-to-noise ratio (SNR) and making the attack less effective. Monitoring pixel patterns on screens using external cameras can also help identify unusual behaviors that may indicate an ongoing PIXHELL attack.
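As a sketch of the noise-generation idea, the snippet below produces broadband masking noise spanning the same 0–22 kHz band; the sounddevice playback shown in the comments is just one possible backend, and the amplitude and duration are arbitrary choices.

```python
import numpy as np

# Broadband masking noise covering 0-22.05 kHz (half the 44.1 kHz rate),
# which overlaps the entire band PIXHELL can use and degrades the SNR
# at any eavesdropping microphone. Amplitude and duration are arbitrary.
SAMPLE_RATE = 44100
SECONDS = 5
noise = np.random.normal(0.0, 0.1, SAMPLE_RATE * SECONDS).astype(np.float32)

# One possible playback backend (loop indefinitely in a real deployment):
# import sounddevice as sd
# sd.play(noise, SAMPLE_RATE, blocking=True)
```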
Dr. Guri’s research into side-channel attacks, including PIXHELL and earlier attacks like RAMBO, continues to demonstrate how even the most secure systems can be vulnerable to innovative exploits. As organizations seek to protect their most sensitive data, it becomes increasingly critical to address not just software vulnerabilities but also the unintended physical side effects of hardware components.
The full technical details of the PIXHELL attack, along with mitigation strategies, can be found in the research paper titled “PIXHELL Attack: Leaking Sensitive Information from Air-Gap Computers via ‘Singing Pixels’” published by Dr. Guri.
Sticking with Dr. Guri, who keeps demonstrating what looks like wizardry, he and his team have discovered another novel side-channel attack from the same sci-fi genre, dubbed RAMBO (Radiation of Air-gapped Memory Bus for Offense), which exploits electromagnetic radiation emitted by a computer’s random access memory (RAM) to exfiltrate sensitive data from air-gapped systems. Fun fact: the “random” in random access memory is a historical contrast with sequential-access media such as tape; it means any address can be reached directly in essentially the same amount of time, not that anything random happens inside the modules.
Developed by Dr. Mordechai Guri, head of the Offensive Cyber Research Lab at Ben-Gurion University of the Negev, the technique demonstrates how air-gapped networks, traditionally considered highly secure, can still be compromised through unconventional means. In a recently published paper, Dr. Guri explains that RAMBO allows malware to encode data, including files, keystrokes, images, and encryption keys, into radio signals generated by manipulating the computer’s RAM. These signals, radiated around the memory bus’s clock frequency, can be intercepted by an attacker using a software-defined radio (SDR) and a simple antenna. Once intercepted, the radio signals can be decoded and translated back into binary information, giving the attacker access to confidential data.
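In spirit, the transmitter side reduces to toggling memory-bus activity on a bit clock. Below is a conceptual Python sketch of such an OOK scheme; the buffer size and symbol timing are assumptions of mine, and a real implementation would need to ensure the traffic actually reaches the DRAM bus rather than being absorbed by CPU caches.

```python
import time

# Conceptual OOK transmitter in the spirit of RAMBO: a '1' bit is a burst
# of heavy RAM traffic (stronger emission around the memory clock), a '0'
# bit is idle. Timing and buffer size are illustrative assumptions; real
# code must defeat CPU caching so the copies actually hit the memory bus.
BIT_SECONDS = 0.01               # ~100 bps, in the ballpark of "low speed"
src = bytearray(8 << 20)         # 8 MiB buffers, larger than typical caches
dst = bytearray(8 << 20)

def transmit(bits: str) -> None:
    for bit in bits:
        deadline = time.perf_counter() + BIT_SECONDS
        if bit == "1":
            while time.perf_counter() < deadline:
                dst[:] = src     # bulk copy -> sustained bus activity
        else:
            time.sleep(BIT_SECONDS)   # idle -> weak emission

transmit("10110010")             # an SDR plus antenna would demodulate this
```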
The RAMBO attack is part of a broader series of side-channel attacks devised by Dr. Guri, which exploit unintended emissions from various computer components to leak data from air-gapped systems. Previous methods include using electromagnetic emissions from power supplies (AirKeyLogger), covert acoustic signals from GPU fans (GPU-FAN), and even visual signals from printer LEDs (PrinterLeak).
What makes RAMBO particularly concerning is its ability to exfiltrate data from air-gapped computers at a rate of up to 1,000 bits per second. This allows attackers to steal sensitive information, such as real-time keystrokes, encryption keys, and small files, in seconds to minutes. For example, a 4,096-bit RSA encryption key can be extracted in just under 42 seconds even at the attack’s low transmission speed (roughly 100 bps); at the full 1,000 bps the same key takes only about four seconds. The attack also enables the exfiltration of biometric information and small documents (.jpg, .txt, .docx) within a few minutes.
While RAMBO requires the target system to be compromised by other means—such as via rogue insiders, infected USB drives, or supply chain attacks—it underscores the ongoing threat posed by advanced data exfiltration techniques, even in highly secure environments.
To block RAMBO attacks, Dr. Guri recommends a range of countermeasures: enforcing “red-black” zone restrictions to isolate secure areas, deploying intrusion detection systems (IDS), monitoring memory access at the hypervisor level, and using radio jammers or Faraday cages to disrupt electromagnetic signals. Whether measures like these will see use outside of research labs any time soon, however, remains to be seen.
As side-channel attacks continue to evolve, RAMBO serves as a reminder that even air-gapped systems, long considered a robust defense against cyber threats, can still be vulnerable to highly sophisticated methods of data exfiltration.
Apple has officially joined the artificial intelligence (AI) race, unveiling a suite of AI-driven features that emphasize privacy and security, marking a significant shift in how the company is positioning itself against competitors like Google, Microsoft, and Meta. During its annual WWDC event, Apple previewed its AI products, including improved voice transcription, Siri upgrades, enhanced photo editing, and integration with OpenAI’s ChatGPT. Under the banner of “Apple Intelligence,” the company’s new AI capabilities aim to differentiate themselves by prioritizing on-device processing to keep user data private.
Apple, long known for positioning itself as a privacy-first company, highlighted that its AI features will handle as much data processing locally as possible, reducing the need to send sensitive information to cloud servers. Many tech firms rely on cloud computing for AI services, sending user data to external servers for processing, where it is more exposed to unauthorized access or exploitation. Apple’s approach contrasts with the cloud-heavy AI models used by other tech giants, which collect vast amounts of user data to optimize their services.
While the cloud-based approach is essential for handling large-scale AI tasks that require significant computational resources, Apple’s solution promises to limit data sent to external servers. For tasks that do require cloud assistance, Apple said it will use a privacy-preserving method that encrypts data before sending it, ensuring minimal data exposure. The company also pledged that it would not store or utilize user data beyond what is necessary for completing AI tasks. To enhance transparency, Apple will allow independent researchers to inspect its AI security protocols.
Apple’s commitment to privacy in AI sets it apart from its Silicon Valley competitors, many of which rely on data collection for advertising and AI optimization. Unlike Meta, Google, and Amazon, Apple’s primary revenue stream comes from hardware sales rather than user data, allowing the company to adopt a privacy-centric stance. However, Apple has faced criticism from privacy activists in the past, and its move into AI will put its privacy claims under greater scrutiny. The company appears to be leveraging its reputation for privacy as a key selling point in the AI market.
This strategy positions Apple as a clear alternative to rivals that have faced backlash for their data-handling practices. Companies like Meta, Google, and Microsoft have been criticized for the extensive data they collect for AI development. In a statement at WWDC, Apple’s software engineering chief, Craig Federighi, underscored this approach, stating, “You shouldn’t have to hand over all the details of your life to be warehoused and analyzed in someone’s AI cloud.”
Despite the promise of greater security, not everyone is convinced. Tesla and SpaceX CEO Elon Musk publicly criticized Apple’s AI partnership with OpenAI, calling it “creepy spyware” and an “unacceptable security violation.” Musk, a co-founder of OpenAI and now a vocal critic, questioned Apple’s ability to ensure data protection when working with third-party AI providers.
Apple’s entry into the AI market, backed by promises of enhanced privacy, adds a new dimension to the ongoing AI arms race. As the company competes with tech giants focused on cloud-based AI solutions, its privacy-focused approach could redefine the conversation about the balance between innovation and user security.
This wraps up today’s issue. Wherever you are out there in the digital world, stay safe, install the latest patches, and keep a watchful eye out for anything that might try to deceive you. Thank you so much for being a wanderer on The Cybersecurity Express, and we look forward to welcoming you on board next time.