Working Student: Security of AI/ML Data Collection in 6G
Nokia
Nokia Strategy & Technology, including Nokia Bell Labs and Nokia Standards, is the renowned research arm of Nokia, having invented many of the foundational technologies that underpin information and communications networks and all digital devices and systems. This research has produced nine Nobel Prizes, five Turing Awards and numerous other awards.
The Core Architecture department of Nokia Standards in Munich offers a master's student the opportunity to gain practical experience by working with highly skilled colleagues in a scientific environment and contributing to future-looking projects.
6G networks will rely on AI/ML for intelligent resource allocation and adaptive communication. These models require large-scale data collection, such as channel state information (CSI). Ensuring data integrity and confidentiality is critical because compromised data can lead to incorrect predictions and security breaches. The student will model attack scenarios and validate/evaluate mitigation methods through simulation experiments.
Your skills and experience
We are looking for highly motivated candidates who are excited about developing cutting-edge technologies for 6G. The candidate should fulfill the following qualifications:
- A bachelor's degree in communications engineering, information theory, or a related field with a grade of at least 2.0 in the German grading system (or equivalent in other grading systems)
- Basic knowledge in wireless communication networks, signal processing techniques, network security, and AI/ML
- Proficiency in relevant programming languages for quantitative evaluations (e.g., C++, MATLAB, Python)
- High proficiency in spoken and written English
- Ability to work independently
- Readiness to collaborate in an international team with colleagues from different cultural backgrounds
- Excellent communication skills
Background
Robust security mechanisms are essential for trustworthy AI/ML in future wireless systems. Developing and validating countermeasures against these threats is key to secure AI/ML deployment.
Potential Security Threats
- Model Poisoning Attack: Tampering with model parameters from distributed nodes, disrupting global model accuracy.
- Data Poisoning Attack: Malicious modification of training data, causing incorrect inferences.
- Inference Attack: Exploiting API queries to replicate model behavior and infer training data.
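As a toy illustration of the data poisoning threat above in a CSI-like setting, the sketch below shows how a few compromised devices injecting extreme values can skew a naive aggregate of collected measurements, and how a robust aggregator limits their influence. The SNR values, the number of attackers, and the median-based mitigation are illustrative assumptions, not part of the project description.

```python
import random
import statistics

random.seed(1)

# Honest reports of a CSI-related quantity (e.g., SNR in dB) from benign devices
# (synthetic values, chosen only for illustration)
honest = [random.gauss(20.0, 2.0) for _ in range(90)]

# Data poisoning: a handful of compromised devices inject extreme outliers
poisoned_reports = honest + [1000.0] * 10

naive_clean = statistics.mean(honest)
naive_poisoned = statistics.mean(poisoned_reports)

# One possible mitigation: a robust aggregator (here the median)
# caps the influence of a minority of malicious reports
robust_poisoned = statistics.median(poisoned_reports)

print(f"mean (clean):      {naive_clean:.1f} dB")
print(f"mean (poisoned):   {naive_poisoned:.1f} dB")
print(f"median (poisoned): {robust_poisoned:.1f} dB")
```

With 10% of reports poisoned, the mean is pulled far from the honest value while the median stays close to it; comparing such aggregation rules under different attack fractions is the kind of quantitative evaluation this project targets.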
Tasks and responsibilities
- Model and simulate attack conditions.
- Implement security methods to defend against selected attacks.
- Validate security methods through experiments.
- Measure and analyze the performance of these methods under different scenarios.
Expected Outcomes
- Experimental setup simulating attacks and security countermeasures.
- Measurement results and performance analysis of security methods.
- Presentation of (intermediate) results in project meetings
- Development of invention reports