Monday, 29 April 2019

Accepted paper for EuCNC2019




Saturday, 6 April 2019

Resource sharing is a necessary evil.

Tuesday, 26 March 2019

Reflections on URLLC applications in 5G

Yes, I am sure all of us have heard about the 1 ms latency requirement for some envisioned applications under the 5G umbrella. Ultra-reliable low latency is an important requirement for deployment scenarios where a rapid exchange of data is expected, such as critical machine-type communications for connected vehicles.
I think this has an important consequence regarding the way we perform measurements of network parameters and the current monitoring schemes in our networks.
Notice the timescales we use at this moment in network management systems. We still measure, for example, throughput on a scale of seconds: 10 Mbps, 100 Gbps. I believe we should begin upgrading these timescales, exploring ways to measure common network parameters, where applicable, at sub-second scales. This would give us a better view of the network state and let us monitor the service much more closely.
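As a rough illustration (not a production tool), here is a minimal Python sketch of sub-second throughput sampling, assuming a Linux host and reading the standard /proc/net/dev counters; the interface name and the 10 ms period are placeholders:

```python
import time

def rx_bytes(iface: str) -> int:
    """Read the received-bytes counter for `iface` from /proc/net/dev (Linux)."""
    with open("/proc/net/dev") as f:
        for line in f:
            name, sep, rest = line.partition(":")
            if sep and name.strip() == iface:
                return int(rest.split()[0])   # first field after ':' is rx bytes
    raise ValueError(f"interface {iface!r} not found")

def sample_throughput(iface: str, period_s: float = 0.01, samples: int = 100) -> None:
    """Print throughput once every `period_s` seconds (10 ms by default),
    instead of the usual once-per-second (or coarser) reporting."""
    prev = rx_bytes(iface)
    for _ in range(samples):
        time.sleep(period_s)
        cur = rx_bytes(iface)
        mbps = (cur - prev) * 8 / period_s / 1e6
        print(f"{mbps:8.2f} Mbit/s over the last {period_s * 1000:.0f} ms")
        prev = cur

if __name__ == "__main__":
    sample_throughput("eth0")   # hypothetical interface name
```

Even at this naive level, the finer granularity exposes bursts that a per-second average would smooth away entirely.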
Even more important is the actual data delivery. We need high 'goodput'. TCP/IP is long known for its large overhead. Should we get serious about implementing more efficient protocols? Embrace deterministic networking? All these "little" improvements have a noticeable effect at the service layer.
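To put a number on the overhead point, a quick back-of-the-envelope calculation in Python, assuming typical TCP/IPv4/Ethernet header sizes and ignoring TLS and retransmissions:

```python
# Back-of-the-envelope goodput for TCP/IPv4 over Ethernet.
# Per-frame cost on the wire: Ethernet header (14) + FCS (4) + preamble (8)
# + inter-frame gap (12) + IPv4 header (20) + TCP header (20, no options).
PER_FRAME_OVERHEAD = 14 + 4 + 8 + 12 + 20 + 20   # bytes

for payload in (64, 512, 1460):                  # application bytes per segment
    wire = payload + PER_FRAME_OVERHEAD
    print(f"{payload:5d} B payload -> {payload / wire:6.1%} of the wire bits are goodput")
# Small, frequent messages (the URLLC pattern) pay the heaviest overhead tax:
# a 64 B payload yields roughly 45% goodput, versus ~91% for a full segment.
```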
The same applies to data center (DC) availability and reliability. Is the five-nines approach still valid? As DCs move closer to the edge, and given their software-driven nature, we need to rethink the metrics we use to evaluate their performance.

Any thoughts about this? This merits further exploration.

Monday, 10 December 2018

TLS v1.3 vs. eTLS


Some months ago, ETSI released TS 103 523-3, which describes an implementation variant of TLS v1.3 for the Middlebox Security Protocol, known as eTLS.

The key complaint is that TLS v1.3 does not allow passive decryption, which is necessary to comply with requests from authorities for clear-text copies of an exchange of traffic. This is because TLS v1.3 does not support RSA key exchange and uses ephemeral Diffie-Hellman instead of static Diffie-Hellman key exchange. These two improvements provide a much higher degree of security compared to the previous version of TLS.
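To see concretely why ephemeral keys matter, here is a minimal sketch using the Python cryptography package, with X25519 standing in for the TLS v1.3 key exchange; the variable names and the scenario are mine, not from the specification:

```python
from cryptography.hazmat.primitives.asymmetric import x25519

# Ephemeral mode (the TLS v1.3 behaviour): fresh key pairs for every session.
client_eph = x25519.X25519PrivateKey.generate()
server_eph = x25519.X25519PrivateKey.generate()
session_key = client_eph.exchange(server_eph.public_key())
# Both private keys are discarded after the handshake, so this secret cannot
# be re-derived later, even by someone who subsequently compromises the server.

# Static mode (the eTLS-style model): the server key outlives the session.
server_static = x25519.X25519PrivateKey.generate()
client_eph2 = x25519.X25519PrivateKey.generate()
session_key2 = client_eph2.exchange(server_static.public_key())

# Anyone holding the static private key plus the client's public key (which
# travels in clear on the wire) can passively re-derive the session secret:
recovered = server_static.exchange(client_eph2.public_key())
assert recovered == session_key2
```

The assertion at the end is precisely the "passive decryption" property: useful for a middlebox, fatal for forward secrecy.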

I have seen some comments on social networks complaining about the nature of ETSI's technical specification, arguing that it constitutes a downgrade of the security level of users' data in transit. And, well, I have some quick thoughts about this.

eTLS features a scheme for longer-lived static Diffie-Hellman keys and allows them to be re-used across multiple sessions. These characteristics pose real risks, since they imply going back to a state similar to the one we had in TLS v1.2. But if we analyze the use cases for which eTLS is intended, I see no difference from common practices in Internet Service Providers (ISPs) regarding the routing of encrypted user traffic.

The most common use case is described in clause 4.2.1 (eTLS with enterprise servers), which covers the situation where a customer wants to access some content offered by an ISP.
TLS v1.3 is used up to the border firewall, and from that point on, eTLS is used between the firewall and the internal servers.
In fact, it is common practice to terminate the TLS connection at some point inside the ISP, whether at the border firewall or at an internal device such as a WAF (Web Application Firewall). This is done to manipulate user flows within the internal infrastructure, for example when load balancing between servers is required. The key point here is that the encryption of this data in transit is managed internally by the ISP, secured by certificates between the servers involved. The confidentiality and integrity of the data are not lost, and the ISP keeps the means to manage the traffic and offer a better service.
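As a toy illustration of that pattern, here is a sketch of a TLS-terminating proxy that re-encrypts towards an internal server, using only the Python standard library; the hostnames, ports and certificate paths are placeholders of my own:

```python
import socket
import ssl

# Terminate the client's TLS session at the "border"...
border_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
border_ctx.load_cert_chain("border.crt", "border.key")      # placeholder paths

# ...and open a second, internally managed TLS session towards the backend,
# validated against the ISP's internal PKI (placeholder CA file).
internal_ctx = ssl.create_default_context(cafile="internal-ca.pem")

with socket.create_server(("0.0.0.0", 8443)) as listener:   # placeholder port
    conn, _ = listener.accept()
    with border_ctx.wrap_socket(conn, server_side=True) as client_tls:
        request = client_tls.recv(4096)
        # Plaintext is visible here: this is where a WAF or load balancer
        # would inspect the flow and pick a backend.
        backend = socket.create_connection(("app1.internal", 8443))  # hypothetical host
        with internal_ctx.wrap_socket(backend, server_hostname="app1.internal") as b:
            b.sendall(request)
            client_tls.sendall(b.recv(4096))
```

The point of the sketch is the two independent TLS contexts: the user-facing session ends at the border, and everything behind it runs under keys the ISP manages itself.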

In summary, TLS v1.3 would be used over the public Internet (where the most dangerous threats are located), and the less secure eTLS would be used inside the ISP's premises (where one supposes the threat level is lower).
Now, the worry is how the ISP is going to manage the static keys (their life-cycle management, up to their destruction to guarantee forward secrecy), assure their chain of custody, and ensure there is no abuse of privileges by employees who could use these powerful “keys” to read personal data. This comes down to the trust we place in our service providers.



Monday, 19 November 2018

AI for evil. #EuroCyberWeek




C&ESAR 2018, day 1

During this week, Rennes will be hosting the European Cyber Week. This is a great opportunity for communication service providers (CSPs), manufacturers, industry players and end users to meet and present current research on Artificial Intelligence and cyber security.

The opening speeches stressed the importance of AI for defense, since networks and services experience attacks every day, not only from external actors but also from inside our networks. The goal is not only to disrupt service but to steal data, jeopardizing research and intellectual property. This is the rise of adversarial AI, and to react to this menace we need to improve our understanding and usage of generative adversarial networks.

Another interesting topic was "opening the black box" to achieve explainable AI, whose goal is to explain the reasoning behind the decisions taken by AI. This is important to gain users' confidence and to speed up the adoption of these techniques across use cases.

Some other topics that caught my attention:

  • The diverse malicious AI techniques used to break cyber-security, such as data poisoning, which aims to induce errors in the machine learning model (a toy illustration follows this list).
  • The bias problem, which most of the time originates in the training data and leads to erroneous decisions at the end of the AI pipeline.
  • The usage of different frameworks for behavioral analysis, very useful for detecting deviations from an entity's usual usage patterns. This helps to detect compromised entities, or users that deviate from or abuse their privileges.
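Here is the promised toy illustration of data poisoning, a sketch with scikit-learn on synthetic data; the attack shown is simple label flipping, just one of many poisoning techniques:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Poison the training set: flip 30% of the labels (0 <-> 1).
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean model accuracy:   ", clean_model.score(X_te, y_te))
print("poisoned model accuracy:", poisoned_model.score(X_te, y_te))
# The poisoned model degrades even though the feature data looks unchanged,
# which is what makes this attack hard to spot from the inputs alone.
```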
I hope day 2 brings new insights and interesting approaches to better understand AI and its challenges.

Saturday, 17 March 2018

InOut2018: summary

Thanks to IMT Atlantique and the approval of my supervisors at my company, I had the opportunity to attend InOut2018, an event about digital mobility perspectives for cities.

A few important remarks about the keynotes, and my point of view on the topics, keeping in mind the objectives of my thesis:


From keynote: Artificial Intelligence: Friend or foe? 
By Nicolas Demassieux.

• Besides defining a goal and finding a way to reach it, the path towards the solution has to be bounded to a specific, delimited scenario.
• A.I. is designed to do only one task, the best way possible. If the problem specification changes, or if the rules governing the problem change, the whole learning process must be restarted.
• Real problems of Artificial Intelligence:
   ◦ Errors: what is their nature, and what are the consequences of those errors (i.e. false positives / false negatives)?
   ◦ Algorithmic bias: the bias is lower when there is a richer dataset to learn from.
   ◦ Explanation of actions: at this moment, Artificial Intelligence is a black box. There is a trend to build an “explicative AI” in order to obtain an explanation of the actions undertaken.
“Golden rules” for a reasonable Artificial Intelligence:
• Inform about the purpose, and allow it to be debated.
• Involve the users.
• Supervise the algorithms: it would be great to have algorithms that verify other algorithms, for example to verify there is no bias (a toy sketch of such a check follows this list).
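A toy sketch of such a verification step, in Python; the metric (demographic parity) and the sample decisions are illustrative assumptions of mine, not from the talk:

```python
import numpy as np

def demographic_parity_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """Compare the rate of positive decisions across two groups (0 and 1).
    A large gap is a warning sign of bias in the audited algorithm."""
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

# Hypothetical decisions from a deployed model, with a sensitive attribute:
decisions = np.array([1, 1, 1, 1, 0, 0, 0, 1, 0, 0])
groups    = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(f"demographic parity gap: {demographic_parity_gap(decisions, groups):.2f}")  # 0.60
```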
Artificial intelligence can be used for evil and for good: it empowers both sides of the coin. We must leverage it to learn how to stop cyberattacks (among other use cases).
For the moment, there is no way for Artificial Intelligence to express emotion; interaction with a system is too limited for that. The machine can imitate emotion, but no more than that.




Afternoon workshop: Sensors and data.

• The automotive sector is embedding more and more sensors into vehicles: not only internal sensors are needed, but also environmental ones. The sector is migrating from merely providing the status of a system to providing a service for the user, for example postural and biometric information.
• The main architectural concept is to connect all sensors to a local gateway, which is in turn connected to the cloud provider via a radio link.
• Data should be shared in order to obtain social, environmental and economic benefits.
• How data is transmitted over the mobile network depends on the nature of the data:
   ◦ Is it cold or hot data?
   ◦ Is it volatile, or is it long-lived information?
   ◦ Should it be stored on board or in the cloud?
• All of these questions should be answered according to the real-time processing needs. The main objective is to transform data into services; it has to be useful. That is why the quality of the data must be good in order to obtain a “good artificial intelligence”.


Food for thought:

• Using A.I. to deal with the complexity of networks is not a new idea. Considering the topic of my thesis, it is interesting to apply this approach to enhance the security of network slices. Since the security issues related to slicing are broad and span several levels depending on the problem being tackled, it is necessary to understand not only the architecture but also where to focus in order to apply these techniques properly.
• A.I., as is well known, can be used as a mechanism to identify network attacks and to provide insight on how to stop them according to their fingerprints.
• Approaches to security can be local (i.e. at the edge) or centralized, depending on the data involved, the level of the affected infrastructure and the scope of the attack.
• Since A.I. solves one particular problem, and we would like to use it to identify multiple security problems, different types of data traffic must be captured, under different scenarios. The more diverse the data, the better the training information for the A.I. engine.
• Another interesting application could be an A.I. “chaos monkey” used to test the A.I. traffic- and attack-identification system in a network. It would be useful for pen-testing and for evaluating the main algorithm.