Check out the preliminary schedule for our workshop at SBRC 2026

The ICoNIoT workshop at SBRC will take place on May 27 and will feature a special keynote lecture by Torsten Braun, beginning at 10:30 a.m. Please see the table below for the scheduled times for each research theme. The schedule is subject to change until the date of the conference.

Time            Thematic Line
10:15 – 10:30   Opening
10:30 – 11:00   Keynote “Energy-efficient Federated Transfer Learning for Privacy-Preserving Energy-Usage Forecasting”
11:00 – 11:15   IoT
11:15 – 11:30   Digital Health
11:30 – 11:45   5G/6G Networks
12:00 – 14:00   Lunch Break
14:00 – 14:15   Smart Cities
14:15 – 14:30   Edge Computing
14:30 – 14:45   Security
14:45 – 15:00   Vehicular Networks
15:00 – 15:15   Optical Networks
15:15 – 15:20   Closing

Research by Jéferson Nobre, of UFRGS and a member of ICoNIoT, combines confidential computing and anonymous communication

The goal is to improve cybersecurity solutions for cloud computing

Researcher Jéferson Nobre (UFRGS) has been working on a relatively new approach to cybersecurity in cloud computing: confidential computing. As he explains, even though the field of cybersecurity is well established, cloud computing poses several challenges of its own, which require looking beyond what is normally considered in security.

What’s new?

Cloud computing has become the invisible backbone of our digital lives. Text messages, artificial intelligence systems, apps—practically everything depends on this infrastructure. In this context, what are the main security gaps, and where are we most vulnerable?
Today, our devices—especially smartphones—lack the capacity to process everything locally. That is why we constantly send data to the cloud, where it is processed and returned. This creates a vulnerability: it is not uncommon for cloud providers to leak information, even unintentionally. Such leaks are attacks on confidentiality and privacy, and the many incidents observed in recent years have fueled growing concern about this weakness.

In information security, two fundamental concepts are confidentiality and privacy. Confidentiality is the responsibility of the organizations that handle user data: service providers must ensure that only they, and those to whom they explicitly grant access rights, can access user information. Privacy, in turn, is the user's right to keep their information to themselves.

The greatest vulnerability, therefore, lies in processing. Although there are mature solutions for data at rest and data in transit, data in use (the processing phase) remains a gap, which confidential computing seeks to fill by extending security guarantees to the moment of processing.
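The gap can be made concrete with a toy sketch: encryption protects a record on disk and on the wire, but a conventional server must decrypt it into plaintext memory before it can compute on it. The cipher below is a deliberately insecure XOR keystream, used only to make the three states of the data visible; every name and value is illustrative.

```python
import hashlib
import os

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Deliberately insecure XOR keystream cipher, for illustration only.
    Applying it twice with the same key returns the original bytes."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

key = os.urandom(32)
record = b"patient_id=42;glucose=105"

at_rest = toy_cipher(key, record)      # encrypted on disk: protected
in_transit = toy_cipher(key, record)   # encrypted on the wire: protected

# To compute on the data, a conventional server must first decrypt it,
# leaving plaintext in memory -- the gap confidential computing targets.
in_use = toy_cipher(key, at_rest)
assert in_use == record
```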

The contribution of Confidential Computing

In this context, confidential computing aims to provide a set of techniques and architectures that enable workloads to be executed in isolated environments, with formal guarantees of confidentiality and integrity.
Confidential computing is based on the idea that it is possible to create, within the cloud, a secure environment in which data can be processed without compromising confidentiality. This is made possible through so-called Trusted Execution Environments (TEEs). This technology is hardware-based and works by creating, within the processor itself, an isolated and protected area. In this space, both the data and the code remain encrypted, preventing external access—including by the cloud provider.
In addition, this environment supports a mechanism called remote attestation, which allows for remote verification that the code being executed is exactly the one that was originally submitted and that it is running within a secure environment. This increases confidence in the processing of sensitive data in the cloud.
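The attestation flow described above can be sketched as follows. This is a hypothetical illustration, not a real TEE API: an HMAC over a simulated hardware key stands in for the vendor-rooted asymmetric signature that a real attestation quote would carry.

```python
import hashlib
import hmac

# Hypothetical stand-in for a TEE's hardware-rooted key; a real quote would
# carry an asymmetric signature chained to the chip vendor.
HARDWARE_KEY = b"simulated-tee-attestation-key"

def measure(code: bytes) -> str:
    """The 'measurement' is a cryptographic hash of the code loaded into
    the isolated environment."""
    return hashlib.sha256(code).hexdigest()

def attestation_quote(code: bytes) -> tuple[str, str]:
    """TEE side: bind the measurement to the hardware key."""
    m = measure(code)
    sig = hmac.new(HARDWARE_KEY, m.encode(), hashlib.sha256).hexdigest()
    return m, sig

def verify_quote(expected_code: bytes, quote: tuple[str, str]) -> bool:
    """Remote verifier: check the signature, then compare the measurement
    with the hash of the code that was originally submitted."""
    m, sig = quote
    expected_sig = hmac.new(HARDWARE_KEY, m.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected_sig) and m == measure(expected_code)

submitted = b"def summarize(messages): ..."
quote = attestation_quote(submitted)
assert verify_quote(submitted, quote)             # running exactly what was sent
assert not verify_quote(b"tampered code", quote)  # any modification is detected
```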

Anonymous Communication

The problem is that, even with these approaches, it is still possible to identify who generated a particular workload by analyzing the traffic. This type of vulnerability is associated with so-called metadata attacks—that is, information about the data itself, such as who sent it, the volume transmitted, the time, and the frequency of interactions. To mitigate this risk, anonymous communication has emerged, whose purpose is to decouple the data from the identity of the sender by separating this information.

Currently, there are already some standards in this area. One of the main ones is OHTTP (Oblivious HTTP), a variation of the HTTP protocol that introduces anonymity into data transmission. This model requires an intermediary relay, operated independently of the organization, and a gateway, both acting between the user and the trusted execution environment. This adds an extra layer of protection, making it difficult to correlate the transmitted data with its source.
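The division of knowledge between relay and gateway can be sketched like this. The sealing function is a toy XOR stand-in for the public-key (HPKE) encryption OHTTP actually uses, and all names and values are hypothetical; the point is only that the relay sees the sender but not the content, while the gateway sees the content but not the sender.

```python
import hashlib
import os

GATEWAY_KEY = os.urandom(32)  # hypothetical key associated with the gateway

def seal(key: bytes, msg: bytes) -> bytes:
    """Toy XOR stand-in for OHTTP's HPKE encryption; applying it twice with
    the same key recovers the original message. Not secure."""
    pad = hashlib.sha256(key).digest()
    return bytes(b ^ pad[i % len(pad)] for i, b in enumerate(msg))

def client(msg: bytes) -> dict:
    # The body is sealed toward the gateway; no identity travels inside it.
    return {"body": seal(GATEWAY_KEY, msg)}

def relay(request: dict, client_ip: str) -> dict:
    # The relay sees WHO is sending (client_ip) but cannot read the sealed
    # body; it forwards the body and drops the network-level identity.
    return {"body": request["body"]}

def gateway(request: dict) -> bytes:
    # The gateway can open the body but never learns who sent it.
    return seal(GATEWAY_KEY, request["body"])

message = b"summarize my inbox"
plaintext = gateway(relay(client(message), client_ip="203.0.113.7"))
assert plaintext == message
```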

The case of messaging systems

As a central case study, Nobre highlights Meta's Private Processing system, which uses Trusted Execution Environments (TEEs), remote attestation, and the Oblivious HTTP protocol to process WhatsApp messages in the cloud (the only viable option, since each user's smartphone lacks the resources to run the AI processing locally) without the company accessing the content or metadata.
The idea is to generate conversation summaries using AI without the provider having access to the content, which would guarantee user privacy (currently ensured by end-to-end encryption).
The solution would be a pipeline combining confidential computing and anonymous communication, enabling AI processing in a way that preserves the privacy promises WhatsApp makes. In this solution, no component has simultaneous access to the user's identity, the content, and the execution environment.

Nowadays, Meta has Meta AI, which is manually added to a conversation and has access only to what the user explicitly sends, not to the user’s entire inbox. This is a superficial control. In the case of Private Processing, the user’s entire inbox is processed, and the user must voluntarily enable this feature. Confidentiality is ensured by a set of technologies that include TEE and OHTTP. An intermediary company is responsible for decoupling the source and destination, and another company handles the audit—which presents a challenge, as this company must be independent and reputable. Additionally, another organization plays a role in configuring cryptographic keys. For the adoption of these technologies, ecosystem fragmentation is one of the major obstacles.

Additional challenge

The encrypted environment within the cloud comes at a high cost. The use of TEEs requires that the Large Language Model (LLM) be run entirely within this secure environment. In this context, it is not appropriate to use general models belonging to service providers, such as those from Meta, since this could imply the use of processed data for training purposes. Thus, processing must occur in isolation within the trusted environment, ensuring that the data is used exclusively for task execution and subsequently deleted without any retention.

How to address confidential computing and anonymous communication

Confidential computing bridges the gap between protecting data at rest/in transit and protecting data in use, which represents a real advance but does not constitute a complete solution for cloud system security. Its guarantees depend on the integrity of the hardware, firmware, supply chain, and attestation services: trust is not eliminated, but extended and redistributed. In this context, anonymous communication is complementary, protecting the metadata that confidential computing alone does not cover. Auditability and transparency are non-optional requirements, as independent audits and immutable logs are part of the trust model.

Watch Jéferson Nobre’s presentation

On April 16, 2026, researcher Jéferson Nobre presented a webinar titled “Security Analysis of Confidential Computing and Anonymous Communication,” providing examples and diagrams to help illustrate the topics discussed. Watch it on our YouTube channel

Our next webinar will be held on April 30 with Dr. Ivan Zyrianoff

The seminar will be titled “Federated Learning at the Edge: Addressing Data Heterogeneity in IoT Systems”

Federated Learning (FL) and Edge AI are important building blocks of scalable and privacy-preserving intelligence in Internet of Things (IoT)-based systems. However, real-world deployments are inherently affected by data heterogeneity (non-IID distributions) across clients, which significantly degrades model performance and convergence. In this talk, we present a system-oriented approach to edge intelligence, combining on-device inference, federated training, and architecture-level solutions to address heterogeneity. We begin with edge-native AI pipelines for real-time sensing and inference, highlighting how local processing reduces latency and communication overhead. We then discuss federated transfer learning strategies that enable collaborative model training across distributed clients while preserving data locality. Finally, we present a novel federated architecture based on a client-shared latent space, which improves robustness to non-IID data by aligning semantic representations across clients while reducing communication costs.
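The federated averaging at the heart of such schemes can be sketched in a few lines. The toy below uses a one-parameter model and hand-made non-IID client datasets (all values are illustrative, not from the talk): each client trains locally, and the server averages the returned parameters weighted by local sample counts, so raw data never leaves a client.

```python
# Toy FedAvg: one-parameter model, hand-made non-IID client data.
# Only model parameters leave each client; the server averages them
# weighted by the number of local samples.

def local_update(weight: float, data: list[float], lr: float = 0.1) -> float:
    """One local pass of gradient descent on the loss 0.5 * (weight - x)^2."""
    for x in data:
        weight -= lr * (weight - x)
    return weight

def fedavg_round(global_w: float, clients: list[list[float]]) -> float:
    """One communication round: local training, then weighted averaging."""
    updates = [(local_update(global_w, d), len(d)) for d in clients]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

# Non-IID: each client observes a different region of the feature space.
clients = [[1.0, 1.2, 0.8], [5.0, 5.1], [9.0, 8.9, 9.2, 9.1]]
w = 0.0
for _ in range(20):
    w = fedavg_round(w, clients)
# w settles between the extreme client means, pulled toward the
# sample-weighted mixture -- without any raw data leaving a client.
```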

The speaker

Ivan Zyrianoff received the B.S. degree in computer science and the M.S. degree in information engineering from the Federal University of ABC, Santo André, Brazil, in 2017 and 2019, respectively, and the Ph.D. degree from the University of Bologna, Bologna, Italy, in 2024. He is a Research Fellow at the University of Bologna and a member of the IoT-Prism Lab. His current research topics encompass interoperability for the Internet of Things, edge computing and intelligence, and proactive caching.

Prof. Dr. Torsten Braun is going to speak at the ICoNIoT Workshop at SBRC

The talk, titled “Energy-efficient Federated Transfer Learning for Privacy-Preserving Energy-Usage Forecasting,” will take place at the opening of the workshop, scheduled for 2:00 p.m. on May 27, 2026.

Torsten Braun

Read the abstract of the presentation:

Accurate forecasting of residential energy demand is increasingly critical due to growing household electrification, renewable integration, climate variability, and diverse consumption patterns. Centralized forecasting models have privacy issues and limitations in dynamic environments. This keynote introduces PEFEDTL, a personalized federated transfer learning (FTL) framework for multivariate energy forecasting in smart homes. It combines temporal convolutional networks with a global attention module and cluster-based personalization. However, most existing FTL approaches largely overlook device heterogeneity and resource constraints, leading to suboptimal efficiency and limited applicability in real-world edge environments. To address this gap, we discuss possible approaches for energy-efficient FTL and present Resource-Aware Federated Transfer Learning (RA-FTL), a framework that adapts both model architecture and resource utilization to heterogeneous client capabilities.

Bio – Prof. Dr. Torsten Braun

Torsten Braun is head of the Communication and Distributed Systems (CDS) research group at the Institute of Computer Science, University of Bern, where he has been a full professor since 1998. He received the Ph.D. degree from the University of Karlsruhe (Germany) in 1993. From 1994 to 1995, he was a guest scientist at INRIA Sophia-Antipolis (France). From 1995 to 1997, he worked at the IBM European Networking Centre Heidelberg (Germany) as a project leader and senior consultant. He was a vice president of the SWITCH (Swiss Research and Education Network Provider) Foundation from 2011 to 2019, and a Director of the Institute of Computer Science (INF) at the University of Bern between 2007 and 2011 and from 2019 to 2021. Currently, he serves as a Director of Studies at INF. He has been a panel member of several national research funding organizations in Switzerland, Luxembourg, Denmark, Finland, Norway, and Sweden. He has supervised more than 40 PhD students, several of them under joint PhD supervision agreements with Unicamp and UFPA, Belém (Brazil).

“Digital Transformation for a World in the Midst of a Climate Emergency” is the central theme of CSBC 2026

How can computing help address today’s environmental challenges?

The central theme of CSBC 2026, “Digital Transformation for a World in the Midst of a Climate Emergency,” invites critical reflection on the role of digital technologies in mitigating environmental impacts and building a more sustainable future.

The event will feature technical sessions, panel discussions, and lectures by national and international experts, providing a qualified space for interdisciplinary discussion on technological innovation and sustainability.

Learn more about CSBC 2026

Researcher Dr. Jéferson Nobre will host a webinar on April 16

The presentation will be titled “Security Analysis of Confidential Computing and Anonymous Communication”

Confidential Computing extends security guarantees to the data processing stage, providing confidentiality and integrity during execution through Trusted Execution Environments (TEEs) anchored in specialized hardware.

Anonymous Communication, in turn, protects the identity of the parties and the metadata associated with interactions—information that remains exposed even when the content is protected by encryption. This presentation discusses the technical foundations of both paradigms and their security properties under a realistic threat model, demonstrating that their guarantees are conditional and depend on a chain of trust based on hardware, remote attestation, and proper separation of responsibilities among components.

As a case study, we analyze Meta’s Private Processing for WhatsApp, which combines TEEs, Oblivious HTTP, and immutable logs to enable AI features while preserving user privacy, illustrating the complementarity between Confidential Computing, Anonymous Communication, and End-to-End Encryption (E2EE).

The speaker

Jéferson Nobre is a professor at the Federal University of Rio Grande do Sul (UFRGS) and a member of the Brazilian Computer Society (SBC). He holds a bachelor's degree in Electrical Engineering from UFRGS (2002), as well as a master's degree in Computer Science (2010) and a Ph.D. (2015) from the same university. He completed a sandwich PhD period (2011–2012) at Cisco Systems (USA) and postdoctoral research at the Federal University of Pará (2016). He is experienced in the field of Computer Networks and Distributed Systems, with an emphasis on Computer Network Management and Security.

 

Latin America’s largest computing event will hold its 46th edition in 2026

The Brazilian Computer Society Conference (CSBC) is an annual event organized by the Brazilian Computer Society (SBC). The 46th edition of the Conference will be held in Gramado, Rio Grande do Sul, from July 19 to 23, 2026.

Over more than four decades, the CSBC has become the most important national scientific event in computer science. The CSBC’s excellent reputation is evident in the quality and significant number of paper submissions to its ten main sub-events and sixteen satellite events.

Other characteristics, such as the diversity and breadth of activities carried out, the relevance of the topics addressed, and the professionalism of its organization, have contributed to CSBC’s consolidation in the calendar of national scientific events and its emergence as the most important event in the field of computer science in South America.

The conference brings together approximately two thousand participants annually, including researchers, students, and professionals from Brazil and abroad.

The event is organized by the Brazilian Computer Society (SBC), the leading scientific entity in the field in Brazil. For this edition, the organizers are researchers Weverton Cordeiro and Alberto Egon Schaeffer Filho (UFRGS), affiliated with INCT ICoNIoT.

Save the date and follow updates via social media and the ICoNIoT newsletter.

 

AI and Edge Computing: Marcelo Claudio Sousa Araújo’s postdoctoral research uses AI to reduce latency for users on the move

The research is supervised by researcher Luiz Fernando Bittencourt

Researchers in the fields of IoT and computer networks are focused on addressing a critical challenge of contemporary life: maintaining fast, uninterrupted connectivity for users who are constantly on the move.

This effort centres on Edge Computing, in which processing is distributed across multiple locations.

The core of the research by ICoNIoT researcher Marcelo Araújo, carried out as part of a postdoctoral fellowship under the supervision of Professor Luiz Fernando Bittencourt (UNICAMP), is to ensure that the data that would normally be processed in the cloud accompanies the user during their daily journeys to work, leisure activities, and so on. The primary objective is to improve latency and the overall user experience, keeping response times as low as possible.

The challenge of edge environments

Despite the need for proximity, there is a major issue requiring research: environments located at the network edge – which may be mini data centres or even routers – are less robust and have less computing power. The research therefore aims to find the best possible way to carry out this data handoff between these environments.

The proposed solution involves creating an algorithm that can be adapted for this specific purpose, capable of identifying when the user is on the move.

Deep learning in decision-making

Araújo’s postdoctoral project combines concepts from Artificial Intelligence (AI) with the work he had already been developing. The focus is on deep learning to enable the computer system to make the best decisions autonomously.

The system evaluates a range of data and metrics to decide how to manage mobility, including:

• Predicting the user’s mobility;
• Checking latency;
• The user’s distance;
• Assessing the infrastructure near the user, particularly if it is congested.

The use of techniques such as DRL (Deep Reinforcement Learning) increases the flexibility of the metrics that the computer system will evaluate during the decision-making process.
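A minimal sketch of this kind of learning-based handoff decision is shown below, using plain tabular reinforcement learning (epsilon-greedy value estimation) rather than the deep variant; the node names, metric values, and reward weights are all hypothetical, chosen only to exercise the metrics listed above.

```python
import random

random.seed(0)  # reproducible illustration

# Candidate edge nodes with hypothetical metrics of the kinds listed above.
nodes = {
    "edge_A": {"latency_ms": 12, "distance_km": 0.5, "load": 0.9},
    "edge_B": {"latency_ms": 25, "distance_km": 2.0, "load": 0.3},
    "edge_C": {"latency_ms": 18, "distance_km": 1.0, "load": 0.5},
}

def reward(m: dict) -> float:
    # Lower latency, distance, and congestion -> higher reward.
    # The weights are assumptions; a DRL agent would learn such trade-offs.
    return -(1.0 * m["latency_ms"] + 5.0 * m["distance_km"] + 20.0 * m["load"])

q = {name: 0.0 for name in nodes}  # running value estimate per node

def choose(eps: float = 0.1) -> str:
    """Epsilon-greedy: mostly exploit the best-known node, sometimes explore."""
    if random.random() < eps:
        return random.choice(list(q))
    return max(q, key=q.get)

def update(node: str, alpha: float = 0.5) -> None:
    """Move the node's value estimate toward its observed reward."""
    q[node] += alpha * (reward(nodes[node]) - q[node])

for _ in range(200):
    update(choose())

best = max(q, key=q.get)  # the node the agent learns to hand off to
```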

Simulations to Overcome Limitations

One of the major obstacles in this type of project is the high cost involved in carrying out hyper-realistic simulations. Researchers get round this limitation in the early stages by using simulators that incorporate accurate features of the real environment. Furthermore, it is possible to model the behaviour and actions that a user would carry out in their daily life. In the case of Araújo’s project, a synthetic map of the city of Athens was created. This map serves to execute the system’s logic and run a simulation that approximates a real environment.

 

The webinar ‘Enabling Real-Time Systems & AI at the Edge’ will be presented on 2nd April by Dave Cavalcanti

In his presentation, entitled ‘Enabling Real-Time Systems & AI at the Edge’, Dr Dave Cavalcanti, Senior Engineer at Intel, will examine the architectural and system-level fundamentals required to enable deterministic real-time computing in conjunction with AI applications on modern edge platforms.

He will focus on time-coordinated computing, mixed-criticality workloads and the convergence of computing and networking as key factors for next-generation cyber-physical systems.

The presentation will discuss hardware, software and networking features, such as Time-Sensitive Networking, benchmarking tools and their role in enabling AI-enhanced real-time systems across various vertical markets.

The speaker

Dave Cavalcanti is a senior engineer at Intel Corporation, with extensive experience in distributed networking systems, connectivity, industry standards and ecosystems. He also serves as chairman of the Avnu Alliance, an industry forum that promotes standards and certification programmes to enable deterministic real-time performance based on interoperable Time-Sensitive Networking (TSN) devices and converged networks.

He obtained a PhD in Computer Science and Engineering in 2006 from the University of Cincinnati, a Master’s degree in Computer Science and a Bachelor’s degree in Electronic Engineering from UFPE in Brazil. He has published over 50 peer-reviewed articles and holds more than 125 granted patents.

K8s-DT – Find out about Professor Francisco Airton’s project

In the research project led by Professor Francisco Airton (UFPI), he and his students Iure Fé (PhD), José Miqueias (MSc) and Lucas Lopes (MSc) are developing analytical models based on Petri nets, a formalism for mathematically representing distributed systems. They create a diagram describing the system and perform probabilistic calculations to obtain various metrics, using a specific tool for this purpose.
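A Petri net's basic mechanics can be sketched in a few lines: places hold tokens, and a transition fires when every input place has a token, consuming tokens from inputs and producing them in outputs. The toy scheduling net below is illustrative only; it is not the group's actual K8s-DT model.

```python
# Minimal Petri net: a marking maps places to token counts; each transition
# is a pair (input places, output places). Toy pod-scheduling net.

marking = {"pod_pending": 2, "node_free": 1, "pod_running": 0}

transitions = {
    "schedule": (["pod_pending", "node_free"], ["pod_running"]),
    "finish":   (["pod_running"], ["node_free"]),
}

def enabled(name: str) -> bool:
    """A transition is enabled when all its input places hold a token."""
    inputs, _ = transitions[name]
    return all(marking[p] >= 1 for p in inputs)

def fire(name: str) -> None:
    """Firing consumes one token per input place, produces one per output."""
    assert enabled(name), f"transition {name} is not enabled"
    inputs, outputs = transitions[name]
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1

fire("schedule")   # first pod takes the free node
fire("finish")     # it completes and releases the node
fire("schedule")   # second pod can now be scheduled
```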

Airton believes that these models can also be interpreted as digital twins. Based on this, the group selects a specific information system — in the case of the PhD student, Kubernetes — and constructs Petri net models that represent the deployment of Kubernetes. After modelling this system, they integrate the model with software capable of running it and monitoring Kubernetes in real time.

The master’s students, meanwhile, use other monitored systems: one works with camera monitoring, and another with a drone simulator.

The team is preparing papers for this year’s SBRC, describing how the digital twin platform can outperform different types of autoscaling. In this case, the system tests different configurations and simulates ‘what-if’ scenarios, which allows the best option to be identified before applying it in the real environment.

In traditional client-server architecture, Kubernetes operates alongside servers and can connect to any client, including IoT devices. On these devices, a set of sensors generates data that varies depending on the context, such as the time of day or traffic flow. This variation in demand requires the Kubernetes deployment configuration to be adjusted dynamically. This is where K8s-DT comes into play, predicting the best new configuration to be implemented.