Friday, December 2, 2022

Securing Data in Transit

Overview

Encryption in transit is the process of encrypting data as it travels from one point to another so that it cannot be read or tampered with by anyone other than the intended recipient. This is especially important when data crosses third-party networks that may not be trustworthy. The fundamental principles for securing data in transit are confidentiality, integrity, availability, zero trust, and perfect forward secrecy.

Confidentiality means that only authorized parties can access the data. Integrity means that the data cannot be modified without detection. Availability means that the data is always accessible to authorized parties. Zero trust means that no one is trusted implicitly. Perfect forward secrecy means that even if long-term encryption keys are later compromised, previously transmitted data remains safe.

These principles are essential for ensuring the security of data in transit.

 

Method  | Scope                                        | Access                            | Perfect Forward Secrecy | Applicability
VPN     | Organization                                 | Network to Network                | Yes                     | General access to multiple resources within networks
mTLS    | Domain, application, device, and user level  | Authentication and Authorization  | Yes                     | Between two parties for access to an application or device
SSL/TLS | Domain                                       | Browser                           | No                      | Applications and protocols, legacy systems

Common Methods

Zero Trust

Zero trust is a security architecture that does not rely on predefined trust levels. One of its most important aspects is verifying the identities of users, devices, and servers. This prevents an unauthorized third party from intercepting and altering communications between two parties (i.e., man-in-the-middle attacks). When combined with other security measures, such as access control, verified identities provide a strong layer of protection for data and communication networks.

Perfect Forward Secrecy

Perfect forward secrecy (PFS) is a cryptographic technique that ensures past messages cannot be decrypted even if an attacker later obtains a long-term private key. With PFS, each session is encrypted with a different, ephemeral key, so even if one session key is compromised, the others remain secure. This makes it much more difficult for an attacker to decrypt all of the traffic.

Even if an attacker obtains the server's long-term private key, they cannot retroactively decrypt recorded sessions; at most, a compromised session key exposes that single session.

VPNs that encrypt all traffic under a single shared key do not offer this property: if that key is compromised, all past and future traffic can be decrypted and read by an attacker. The same applies to SSL/TLS configurations without ephemeral key exchange, where the server's private key protects every session.

PFS is therefore more secure than VPN or SSL/TLS configurations that lack it, as it makes it much harder for an attacker to compromise communications. Even if one session key or the server's private key is compromised, past and future sessions cannot be decrypted.
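
As a quick way to check whether a given server negotiates an ephemeral (PFS-capable) key exchange, you can offer only ECDHE cipher suites with openssl s_client; the host name below is a placeholder for illustration:

# Offer only ECDHE (ephemeral) cipher suites; the handshake only succeeds if the server supports them
openssl s_client -connect example.com:443 -tls1_2 -cipher 'ECDHE' -brief </dev/null

TLS 1.3 uses ephemeral key exchange on every handshake, so any server that negotiates TLS 1.3 provides forward secrecy by default.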

When these principles are followed, data can be transmitted securely between parties without fear of it being compromised.

Mutual TLS

Mutual TLS (mTLS) brings together the aforementioned principles by ensuring that only those authorized to access or change data can do so. mTLS provides a higher level of protection against interception and tampering. It can be used to secure communication between any two parties, but it is particularly well suited to securing communication between devices and servers, and it helps protect against a wide range of attacks, including man-in-the-middle attacks and eavesdropping.
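
Before mTLS can be used, the parties need certificates they can present to one another. As a minimal sketch, the commands below create a private CA and issue a client certificate signed by it; all file names and subject names are placeholders for illustration:

# Create a private CA (self-signed), then a client key and certificate signed by that CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt -subj "/CN=Example Internal CA" -days 365
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr -subj "/CN=example-client"
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 365

The server is then configured to trust ca.crt for client authentication, and the client presents client.crt and client.key during the handshake.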

mTLS Architecture

In practice, mTLS is used in the following ways; a minimal client-side example follows the list.

Identify and Authorize

  • Connecting devices to the company’s network resources

  • Content delivery network/cloud security services into back-end servers

  • External party's devices or applications to the company’s network resources

  • APIs used in business-to-business (B2B) data transfers

  • Microservice architectures where each microservice must guarantee that every component with which it interfaces is legitimate and unmodified

  • Sensors connected to the Internet of Things (IoT)
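
As a minimal client-side sketch of the cases above, the command below calls an internal API over mTLS using curl; the host name and certificate file names are placeholders:

# Present a client certificate and key, and verify the server against the internal CA
curl --cacert ca.crt --cert client.crt --key client.key https://api.internal.example/health

If the server requires a client certificate and none is presented, the TLS handshake fails before any application data is exchanged.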

Connecting Cloud Services and On-Premises Servers

One example use case is using mTLS to secure communications between an e-commerce platform's origin servers and the cloud services it depends on (e.g., a Content Delivery Network (CDN), storage accounts, S3 buckets, or event hubs) to help prevent phishing or ransomware attacks. By requiring both the cloud service and the origin server to authenticate to one another, mTLS eliminates interception points that could enable an attacker to distribute unwanted content to clients. This can be a particularly important security measure for e-commerce platforms that use, for example, a CDN to serve website content.
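
As a rough sketch of the origin side of this arrangement, openssl's test server can be made to require a client certificate, which is the essential behavior an origin enforces toward a CDN; the certificate file names and host name are placeholders:

# Origin: require a client certificate issued by the CDN's CA (-Verify makes the client certificate mandatory)
openssl s_server -accept 8443 -cert origin.crt -key origin.key -CAfile cdn-ca.crt -Verify 1 -www

# CDN side: connect and present its client certificate
openssl s_client -connect origin.example.com:8443 -cert cdn-client.crt -key cdn-client.key -CAfile origin-ca.crt

In production the same requirement is configured on the web server or load balancer in front of the origin rather than with openssl, but the handshake behavior is identical.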

Service Mesh Environments

Microservices architecture divides application components into services that run across servers, commonly inside containers, on several modernized systems. These services must communicate over the network to exchange data, whereas traditional monolithic applications conduct all communication in memory. Although ordinary TLS can encrypt communication among microservices, it does not authenticate the calling service, which leaves exposed services open to interception and misuse. Application developers can control service-to-service traffic by using a service mesh, which consolidates microservice administration and controls both endpoints of the connection by encrypting and authenticating traffic with mTLS.

When it comes to authentication in a service mesh environment, using mTLS has several advantages. For one, it eliminates the need to provision user accounts separately for each service, saving identity teams significant effort in creating, provisioning, and managing individual or shared accounts. Additionally, mTLS provides a higher level of security than user-based authentication methods, making it well suited to sensitive data and transactions. Finally, mTLS is often more scalable than other authentication methods, supporting many users and services without running into performance issues.
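
As one illustration, assuming an Istio-based service mesh (other meshes offer equivalent settings), a single policy applied in the mesh's root namespace requires mTLS for every workload:

# Require mTLS for all workloads in the mesh
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
EOF

The mesh's sidecar proxies then issue, rotate, and verify workload certificates, so individual services need no certificate-handling code of their own.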

Legacy or Resource Constraints

Legacy systems that are out of date or cannot meet the resource requirements of decryption will need a proxy in order to use mutual TLS (mTLS). The most common option is a reverse proxy (e.g., a firewall or load balancer) that acts as a go-between for the legacy system and the requesting client. The proxy can be configured to terminate and decrypt incoming mTLS traffic and then re-encrypt it with a lesser form of encryption (i.e., one-way SSL/TLS) before sending it to the legacy system.
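
As a minimal sketch of this pattern, socat can stand in for a dedicated reverse proxy: it terminates mTLS from clients, requiring a certificate signed by the trusted client CA, and re-encrypts the traffic with one-way TLS toward the legacy system. Host names and file names below are placeholders:

# Terminate mTLS from clients (verify=1 requires a client certificate), then forward over one-way TLS
socat OPENSSL-LISTEN:8443,reuseaddr,fork,cert=proxy-cert.pem,key=proxy-key.pem,cafile=client-ca.crt,verify=1 \
      OPENSSL:legacy-host:443,verify=0

A production proxy would normally also verify the legacy system's certificate (verify=1 with an appropriate cafile on the second address); verification is relaxed here only to mirror the constrained legacy scenario described above.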

Another option is to use a forward proxy (e.g., an API gateway, streaming or event broker, message queue, or load balancer). This type of proxy sits in front of the legacy system and encrypts outgoing traffic before sending it to its destination. Forward proxies can also be used to decrypt incoming traffic.

Finally, you can use a combination of reverse and forward proxies. This gives you the best of both worlds: the ability to decrypt and encrypt traffic as needed while still providing authentication, authorization, and rate-limiting capabilities.

Even though both reverse and forward proxies can encrypt and decrypt traffic, they should be used sparingly: reserve them for cases where mutual TLS is not supported, or where the data does not warrant this level of protection, such as publicly available information that is not directly accessible to external parties or exposed to the Internet.

Legacy Systems

Legacy systems often do not support mutual TLS (mTLS) because they rely on outdated protocols and technologies.

TLS passthrough can be used to provide mTLS communication between clients and servers that use different TLS implementations. When using TLS passthrough, it is important to ensure that the certificates that are being passed between the client and server are properly verified. Otherwise, an attacker could intercept the communication and impersonate either the client or the server.
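
Independent of whichever proxy sits in the path, a quick manual check that a presented certificate chains to the expected CA can be done with openssl; the file names are placeholders:

# Confirm the certificate was issued by the expected CA
openssl verify -CAfile trusted-ca.crt presented-cert.pem

An output of "OK" means the certificate chains to the given CA; anything else should be treated as a failed verification.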

To verify certificates when using TLS passthrough, you can use a tool such as SSLsplit. SSLsplit is a transparent TLS proxy that terminates connections from clients and forwards them to legacy servers. It can be configured to verify the certificates that are presented by both clients and servers, and to log any errors that occur during verification.

For example, to terminate mTLS in front of a legacy system that only supports one-way TLS, you could use a command along the following lines:

sslsplit -D -k <proxy_ca_key> -c <proxy_ca_cert> ssl <listen_ip> <listen_port> <legacy_system_ip> <legacy_system_port>

Replace <proxy_ca_key> and <proxy_ca_cert> with the key and certificate of a certificate authority that connecting clients trust; SSLsplit uses them to generate the server certificate it presents to clients. Replace <listen_ip> and <listen_port> with the address and port SSLsplit should listen on, and <legacy_system_ip> and <legacy_system_port> with the address and port of the legacy system.

This will cause SSLsplit to terminate the mTLS connection from the client and forward the traffic to the legacy system over ordinary one-way TLS. The client will be unaware that its connection has been terminated and will continue to communicate as if it were talking to the legacy system directly.

Setup SSLsplit

SSLsplit is typically installed on the same machine as the legacy system. If the legacy system is on a different machine, traffic can be redirected to it with a tool such as iptables or socat. For example, to redirect traffic arriving on port 443 of the local machine to port 8443 on the legacy system, you would use the following command:

iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination <legacy_system_ip>:8443

Replace <legacy_system_ip> with the IP address of the legacy system. This will cause all traffic that is sent to port 443 on the local machine to be redirected to port 8443 on the legacy system.

SSLsplit can also present an intermediate CA chain to clients by specifying the -C option with a chain file when starting SSLsplit. Connections that cannot be split, such as those that require client certificates SSLsplit cannot forge, can be passed through unmodified with the -P option.

When running SSLsplit in front of a legacy system that cannot support anything newer than TLS 1.0, you can use the -r tls10 option to force SSLsplit to speak only that protocol version. This will of course reduce the security of the connection, so it is only recommended as a last resort.
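
To confirm whether a legacy endpoint really is limited to TLS 1.0 before relaxing the configuration, you can attempt a TLS 1.2 handshake against it; the address placeholders match those used above:

# If this handshake fails, the legacy system likely cannot negotiate anything newer than TLS 1.0/1.1
openssl s_client -connect <legacy_system_ip>:<legacy_system_port> -tls1_2 </dev/null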

Management of Digital Certificates

To support mTLS, Sleep Number should consider using a tool like Certbot to automatically generate, renew, and manage digital certificates. Certbot is a free and open-source tool that makes it easy to obtain and install certificates from a trusted certificate authority such as Let's Encrypt.

Certbot automates the server certificates used on the TLS side of an mTLS deployment; the client certificates that servers verify are typically issued separately, often from an internal certificate authority. Note that the older tls-sni-01 validation method has been deprecated for security reasons; current Certbot releases validate domain control with the http-01 or dns-01 challenges.

To use Certbot, you will need to install the Certbot client on the server. The Certbot client can then obtain a certificate and, for supported web servers, configure the server to use it.

If a web server is already serving content for the domain, the --webroot authenticator can validate the domain without stopping the server. For example:

certbot certonly --webroot -w <webroot_path> -d sleepnumber.com

If no web server is running yet, the --standalone authenticator temporarily listens on port 80 to complete the validation. For example:

certbot certonly --standalone -d sleepnumber.com

Certbot stores the resulting certificate and key under /etc/letsencrypt/live/sleepnumber.com/, where the web server configuration can reference them. Certificates are renewed with the certbot renew command, and the --force-renewal flag can force renewal before the certificate nears expiry.
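
As a small sketch of automating renewal, assuming cron is available (many distributions install an equivalent systemd timer with the Certbot package) and assuming nginx as the web server, a daily crontab entry can attempt renewal and reload the server only when a certificate actually changes:

# Run daily at 03:00; certbot only renews certificates that are near expiry
0 3 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"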

You can learn more by reading the Certbot documentation.
