Monday, 10 December 2018

TLS v1.3 vs. eTLS


Some months ago, ETSI released TS 103 523-3, which describes an implementation variant of TLS v1.3, called eTLS (Enterprise Transport Security), for its Middlebox Security Protocol.

The key complaint is that TLS v1.3 does not allow passive decryption, which is deemed necessary to comply with requests from authorities for the clear text of an exchange of traffic. This is because TLS v1.3 does not support RSA key exchange and uses ephemeral, instead of static, Diffie-Hellman key exchange. These two improvements give a much higher degree of security than the previous version of TLS.
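To make the difference concrete, here is a minimal sketch (in Python, with the pyca/cryptography library) of why an ephemeral key gives forward secrecy while a static one does not; the function names and comments are mine, not taken from either specification.

```python
# Sketch: ephemeral vs. static Diffie-Hellman key exchange (X25519).
# Requires the pyca/cryptography package; names are illustrative.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

def ephemeral_session(peer_public_key):
    """TLS v1.3 style: a fresh key pair for every session.

    The private key is discarded after the handshake, so a recording of
    the traffic cannot be decrypted afterwards (forward secrecy).
    """
    private = X25519PrivateKey.generate()  # new key per session
    return private.public_key(), private.exchange(peer_public_key)

# eTLS style: a long-lived static key, reused across many sessions.
# Whoever holds it (e.g. a middlebox) can re-derive the shared secret
# of every recorded session, which is what enables passive decryption.
STATIC_KEY = X25519PrivateKey.generate()

def static_session(peer_public_key):
    return STATIC_KEY.public_key(), STATIC_KEY.exchange(peer_public_key)
```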

I have seen some comments on social networks complaining about the nature of ETSI’s technical specification, arguing that it downgrades the security of users’ data in transit. Well, I have some quick thoughts about this.

eTLS features a scheme of longer-lived, static Diffie-Hellman keys and allows them to be reused across multiple sessions. These characteristics pose high risks, since they imply going back to a state similar to the one we had in TLS v1.2. But if we analyze the use cases under which eTLS is used, I see no difference from common practices among Internet Service Providers (ISPs) regarding the routing of encrypted user traffic.

The most common use case is described in clause 4.2.1 (eTLS with enterprise servers), which covers the situation where a customer wants to access some content offered by an ISP: TLS v1.3 is used up to the border firewall, and from that point on, eTLS is used between the firewall and the internal servers.
It is a usual practice to break the TLS connection at some point inside the ISP, whether at the border firewall or at an internal device such as a WAF (Web Application Firewall). This is done to manipulate the user flows in the internal infrastructure, for instance when it is necessary to perform load balancing between servers. The key idea here is that the encryption of this data in transit is managed internally by the ISP, secured by certificates between the servers involved. The confidentiality and integrity of the data are not lost, and the ISP has a way to manage the traffic to offer a better service.
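As a toy illustration of that pattern, the sketch below terminates a client’s TLS session on a public-facing socket and opens a separate encrypted connection to an internal server; the addresses, ports and certificate paths are placeholders of my own, not anything prescribed by ETSI.

```python
# Sketch of TLS termination at a border device, re-encrypting towards an
# internal server. Addresses and certificate paths are placeholders.
import socket
import ssl

# Public-facing side: the client's TLS session ends here.
front_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
front_ctx.load_cert_chain("frontend-cert.pem", "frontend-key.pem")

# Internal side: a separate TLS session keeps the data encrypted in
# transit inside the ISP, secured by an internal certificate authority.
back_ctx = ssl.create_default_context(cafile="internal-ca.pem")

listener = socket.create_server(("0.0.0.0", 443))
conn, _ = listener.accept()
with front_ctx.wrap_socket(conn, server_side=True) as client_side:
    request = client_side.recv(4096)  # clear text exists only inside this box
    backend = socket.create_connection(("10.0.0.5", 8443))  # e.g. picked by a load balancer
    with back_ctx.wrap_socket(backend, server_hostname="app.internal") as server_side:
        server_side.sendall(request)
        client_side.sendall(server_side.recv(4096))
```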

In summary, TLS v1.3 would be used over the public Internet (where the most dangerous threats are located), and the less secure eTLS would be used inside the ISP premises (where one supposes that the threats are reduced).
Now, the worry is about how the ISP is going to manage the static keys (their life-cycle management, up to their destruction to guarantee forward secrecy), assure their chain of custody, and ensure that there is no abuse of privileges by employees who could use these powerful “keys” to read personal data. This is a topic that comes down to the trust we place in our service providers.



Monday, 19 November 2018

AI for evil. #EuroCyberWeek




C&ESAR 2018, day 1

During this week, Rennes is hosting the European Cyber Week. This is a great opportunity for communication service providers (CSPs), manufacturers, industry players and end users to meet and present current research activities on Artificial Intelligence and cyber security.

The opening speeches stressed the importance of AI for defense, since every day networks and services experience attacks, not only from external actors but also from inside our networks. The aim is not only to disrupt service but to steal data, jeopardizing research and intellectual property. This constitutes the rise of adversarial AI, and to react to this menace it is necessary to enhance our understanding and usage of generative adversarial networks.

Another interesting topic was "opening the black box" in order to have explainable AI, which has the goal of explaining the reasoning behind the decisions taken by AI. This is important to gain the confidence of users and to speed up the adoption of these techniques in various use cases.

Some other topics that caught my attention:

  • The diverse malicious AI techniques used to break cyber security, such as data poisoning, whose aim is to induce errors in the machine-learning model (a toy example follows this list).
  • The bias problem, which most of the time has its source in the training data and leads to erroneous decisions at the end of the AI process.
  • The usage of different frameworks for behavioral analysis, very useful for detecting deviations from the usual usage patterns of an entity. This helps to detect compromised entities, or users that deviate from or abuse their privileges.
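To illustrate the data-poisoning point, here is a minimal sketch using scikit-learn on synthetic data of my own choosing; it shows label flipping, one simple poisoning technique, and is not a reproduction of anything presented at the conference.

```python
# Toy data-poisoning demo: flipping a fraction of the training labels
# degrades the learned model. Synthetic data; illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The "attacker" flips 30% of the training labels.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
flip = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[flip] = 1 - poisoned_y[flip]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```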
I hope day 2 brings new insights and interesting approaches to better understand AI and its challenges.

Saturday, 17 March 2018

InOut2018: summary

Thanks to IMT Atlantique and the approval of my supervisors at my company, I had the opportunity to attend InOut2018, an event about digital mobility perspectives for cities.

A few important remarks about the keynotes, and my point of view on the topics, keeping in mind the objectives of my thesis:


From the keynote "Artificial Intelligence: Friend or foe?" by Nicolas Demassieux:

  • Besides defining a goal and finding a way to reach it, the path towards the resolution of the problem has to be bounded to a specific, delimited scenario.
  • A.I. is designed to do only one task, in the best way possible. If the problem specification changes, or if the rules governing the problem change, the whole learning process has to be restarted.
  • Real problems of Artificial Intelligence:
    ◦ Errors: what is their nature, and what are the consequences (i.e. false positives / false negatives)?
    ◦ Algorithmic bias: the bias is lower if there is a richer database to learn from.
    ◦ Explanation of actions: at this moment, Artificial Intelligence is a black box. There is a trend to build an "explainable AI" in order to have an explanation of the actions undertaken.
"Golden rules" for a reasonable Artificial Intelligence:
  • Information about the purpose, and the possibility to debate it.
  • Involvement of the users.
  • Supervision of the algorithms: it would be great to have algorithms that verify other algorithms, for example, to verify that there is no bias (a small sketch of such a check follows below).
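As a minimal sketch of what such a verification could look like, the check below computes the demographic-parity gap, a standard fairness metric, on another model's outputs; the names, data and threshold are hypothetical, not something shown at the keynote.

```python
# Sketch: one algorithm auditing another for bias via demographic parity.
# Names, data and threshold are illustrative.
import numpy as np

def demographic_parity_gap(predictions, group):
    """Difference in positive-prediction rates between two groups.

    predictions: 0/1 outputs of the audited model.
    group:       0/1 membership in e.g. a protected attribute.
    A gap near 0 suggests both groups are treated similarly.
    """
    return abs(predictions[group == 0].mean() - predictions[group == 1].mean())

# Hypothetical usage: flag the audited model if the gap is too large.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
if demographic_parity_gap(preds, groups) > 0.2:
    print("possible bias: positive-prediction rates differ across groups")
```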
Artificial Intelligence can be used for evil and for good; it empowers "both sides of the coin". We must leverage it to learn how to stop cyberattacks (among other use cases).
For the moment, there is no way for Artificial Intelligence to express emotion; there are limits to how one can interact with a system to do so. The machine can imitate emotion, but no more than that.




Afternoon workshop: Sensors and data.

  • The automotive sector is embedding more and more sensors into vehicles. Not only internal sensors are needed, but also environmental ones. The sector is migrating from just providing the status of a system to providing a service for the user: postural and biometric information.
  • The main architectural concept is to connect all sensors to a local gateway, which is then connected to the cloud provider via a radio link.
  • Data should be shared in order to obtain social, environmental and economic benefits.
  • Considerations on the transmission of data via the mobile network depend on the nature of the data (see the sketch after this list):
    ◦ Is it cold or hot data?
    ◦ Is it volatile, or is it considered long-duration information?
    ◦ Should it be stored on board or in the cloud?
  • All of these questions should be answered depending on the real-time processing needs. The main objective is to transform data into services; the data has to be useful. That is why the quality of the data must be good in order to have a "good artificial intelligence".
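A tiny sketch of how those questions could drive a placement decision; the data model and the rules are entirely hypothetical, just to make the trade-offs concrete:

```python
# Hypothetical rule mapping the questions above to a storage location.
# The classification and the rules are illustrative only.
from dataclasses import dataclass

@dataclass
class SensorReading:
    hot: bool       # needed by a real-time, on-board process?
    volatile: bool  # short-lived, or long-duration information?

def placement(reading: SensorReading) -> str:
    if reading.hot:
        return "on-board"  # real-time needs: keep it local
    if reading.volatile:
        return "discard"   # cold and short-lived: not worth transmitting
    return "cloud"         # cold, long-duration: send via the gateway

print(placement(SensorReading(hot=True, volatile=False)))   # -> on-board
print(placement(SensorReading(hot=False, volatile=False)))  # -> cloud
```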


Food for thought:

  • The idea of using A.I. to deal with the complexity of networks is not new. Considering the topic of my thesis, it is interesting to consider this approach to enhance the security of network slices. Since the security issues related to slicing are broad and involve several levels according to the problem to tackle, it is necessary not only to understand the architecture but also to know where to focus in order to apply this technique properly.
  • A.I., as is well known, can be used as a mechanism to identify network attacks and provide insight on how to stop them according to their fingerprints.
  • Approaches to security can be local (i.e. on the edge) or centralized, depending on the data involved, the level of the affected infrastructure, and the scope of the attack.
  • Since A.I. solves one particular problem, and we would like to use it to identify multiple security problems, different types of data traffic must be captured, using different scenarios. The more diverse the data, the better the training information for the A.I. engine.
  • Another interesting application could be an A.I. "chaos monkey" that could be used to test the A.I. traffic- and attack-identification system of a network. It would be useful for pen testing and for evaluating the principal algorithm.

Thursday, 1 February 2018

From the Deloitte TMT Predictions session (II)

From the Deloitte TMT Predictions session (I)

Sunday, 21 January 2018

Reflections after 5G Transformer plenary

Last week I had the opportunity to attend a 5G Transformer plenary, hosted by IRT bcom here in Rennes. This European Commission project has the objective of transforming today’s mobile transport network into an SDN/NFV-based Mobile Transport and Computing Platform (MTP) and, with this, incorporating the Network Slicing paradigm into mobile transport networks, empowering the operator to provision and manage MTP slices designed to fulfill the specific needs of vertical industries.

This was a great experience on two planes. On the personal one, it was great to meet people with tons of experience and expertise, each with a unique point of view about the proposed challenges and a disposition to share and build knowledge. It was interesting to experience their openness to listen: I had the opportunity to talk with some of them about my thesis, and they provided valuable feedback and references to explore further. On the technical plane, it seems that their problems are also my problems with network slicing: how to orchestrate the resources and how to abstract them correctly for consumers. I also witnessed the importance of the participation of stakeholders like telecom operators and the automotive industry in the plenary, because they provide concrete use cases and practical views on the subject.

One aspect that caught my attention was the trade-off between the desire to provide a complete architecture (one that looks into the future and is flexible enough to embrace the use cases we have not yet thought of) and the complex task of explaining this architecture to a stakeholder. I sensed an inner desire to avoid complexity and just provide an architecture for a simple scenario, one that is easy to support and communicate. I sincerely dislike this approach, since we would be limiting the scope of the architecture to simple use cases.
Future scenarios will involve intensive mobility management, frequent handovers, and heterogeneous (access and core) networks spanning multiple domains and administrative boundaries. Do you need to support and push forward ($$€€) this complex idea with a stakeholder? Call a marketing guy, who I am sure can come up with a business idea to support the use case. We have to aim higher and try to cover as many scenarios as possible, making the architecture as flexible and open as possible. This will ensure that all sectors of society are included and that the technology will find a way to contribute not only to industries, governments and cities, but also to people, enhancing their quality of life. We need to focus on humanity.

Or maybe there were other interests behind it that I could not grasp at the moment, who knows. In either case, it was a great experience: I learned a lot, and I got a view of the complexity of putting a large audience on the same page, the difficult task of persuading people, how to lead a technical discussion, and the different methods that can be used to present ideas and technical information.