Connecting to Token
Connecting to the Token Platform often means rapidly preparing your environment for integration, and a fast-tracked implementation may lead to common fundamentals being overlooked. At Token, we're proud proponents of site reliability engineering and firm believers in sharing tools and knowledge. Hence, should you require assistance with any aspect of the information in the preceding topics or those that follow here, please contact our support team. We'll strive to help resolve any issues you encounter.
In the meantime, let's take a brief look at potential connectivity pain points, with an eye to avoiding unnecessary server-to-server communication issues involving protocols, load balancers, proxy servers and mutual TLS (mTLS).
A protocol is a set of rules that govern the data communication mechanisms between clients (for example web browsers used by internet users to request information) and servers (the machines containing the requested information).
Protocols usually consist of three main parts: Header, Payload and Footer. The Header, placed before the Payload, contains information including source and destination addresses, as well as other details (such as size and type) regarding the Payload. The Payload is the actual information transmitted using the protocol. The Footer follows the Payload and, together with the Header, works as a control field to route client-server requests to the intended recipients and to ensure the Payload data is transmitted free of errors.
Token leverages gRPC, an open source remote procedure call (RPC) framework that can run anywhere, enabling client and server applications to communicate transparently and making it easier to build connected systems. gRPC uses protocol buffers, Google's mature open source mechanism for serializing structured data (think XML, but smaller, faster and simpler), to exchange messages between server applications (our clients' applications) and client applications (the Token Cloud: functionality, data and resources running on physical and virtual servers maintained and controlled by Token, and accessed via an Internet connection). gRPC permits a client application to directly call a method on a remote server application as if it were a local object, making it simple to create distributed services, which is effectively what the Token Cloud is: a massive, distributed application linking banks to TPPs and users.
In order to operate in the manner required, gRPC makes use of HTTP/2 for its binary framing and compression capabilities, as well as HTTP/2's native support for connection multiplexing. HTTP/2 extends HTTP to deliver simplicity, high performance and robustness without necessitating additional networking technologies, reducing latency through techniques such as multiplexing, compression, request prioritization and server push. This can create some minor complications in environments not accustomed to serving HTTP/2, but nothing that cannot be easily overcome.
Load balancers at the cloud or network edge are typically the first device to receive a connection from a source that seeks to connect with a destination within an organisation's environment. With respect to the Token Cloud, this entails connecting to the SDK operated by our client. However, not all load balancers are created equal. Some have full support for HTTP/2, some have only partial support, and others simply refuse to accept HTTP/2 altogether. Understanding the difference and why it exists is key to a successful implementation.
Google Cloud load balancers fully support HTTP/2 and merely require creating a load balancer, pointing a public IP address at its outside interface, then configuring a listener pool to receive traffic. Seeing that Google drove the initial development of gRPC, it's no surprise that its load balancers include native Layer 7 (application layer) support for gRPC and HTTP/2.
Within Amazon Web Services (AWS), the waters begin to muddy somewhat. A Network Load Balancer (NLB) with TCP listeners is fully compatible because it is a Layer 4 (transport layer) load balancer that distributes TCP connections to hosts within the associated target group. An NLB becomes incompatible the moment a TLS listener is provisioned, since this makes the load balancer intercept the TLS negotiation and pass a non-conforming connection to the application within the target group. Thus, when using an AWS NLB, you simply create TCP listeners and never worry about your load balancer again.
An AWS Application Load Balancer (ALB) is a different beast. As a Layer 7 load balancer, its interest is squarely in application protocols. AWS documentation says HTTP/2 is supported and that an ALB will handle up to 128 requests in parallel using a single HTTP/2 connection. Moreover, a comparison of AWS load balancers would lead one to believe that ALB support for HTTP/2 is better than that of an NLB. Nonetheless, buyer beware: while your ALB will accept HTTP/2 on its outside interface, it will silently downgrade those connections to HTTP/1.1 when it forwards them on to applications within the target group. This means your SDK integration will start throwing ALPN (Application-Layer Protocol Negotiation) errors at best, or give no indication of the problem at worst.
If you are running your services on Azure, look to the Layer 4 Azure Load Balancer, which supports load balancing TCP and UDP connections, meaning that it will not interfere with the higher-level protocols used by gRPC.
The takeaway here is that your mileage may indeed vary when it comes to your load balancer's support for HTTP/2. It is worth taking a look at your load balancer documentation to see whether HTTP/2 is supported and, if so, how and to what extent. Many devices, like an AWS ALB, will accept an inbound HTTP/2 connection and pass on HTTP/1.1 connections to any target for which they are configured to pass traffic. Web Application Firewall (WAF) devices, such as Barracuda's WAF, are an example of this limitation.
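As a quick sanity check, a tool like curl (built with HTTP/2 support) can reveal which protocol version an endpoint actually negotiates; the URL below is a placeholder for your own load balancer's public endpoint:

```shell
# Ask curl to attempt HTTP/2 and print the protocol version actually negotiated.
# Replace the URL with your load balancer's public endpoint (placeholder shown).
curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://your-lb.example.com/
```

If this prints 1.1 even though HTTP/2 was requested, a device in front of your service is likely downgrading the connection.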
In environments where a proxy server is mandated for outbound connections to services on the Internet, Token SDKs support common proxy directives as environment variables or parameters passed to the runtime. HTTP_CONNECT proxies are supported in gRPC by default. Use the following settings to support your respective SDK.
Java SDK Proxy Support
To support proxy servers from the Java SDK, pass these flags to the JVM (Java Virtual Machine):
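These are the standard JVM networking system properties; the proxy host, port and application jar name below are placeholders for your environment:

```shell
# Standard JVM proxy system properties (host, port and jar name are placeholders).
java -Dhttps.proxyHost=proxy.example.com \
     -Dhttps.proxyPort=3128 \
     -jar your-token-sdk-app.jar
```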
To bypass the proxy for addresses your application may need to support, add:
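The JVM's standard bypass property is `http.nonProxyHosts`, which takes a pipe-separated list; the hostnames below are placeholders:

```shell
# Exclude local and internal hosts from proxying (pipe-separated JVM syntax;
# host names are placeholders for your environment).
java -Dhttps.proxyHost=proxy.example.com \
     -Dhttps.proxyPort=3128 \
     -Dhttp.nonProxyHosts="localhost|*.internal.example.com" \
     -jar your-token-sdk-app.jar
```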
C# SDK Support
To have your Token C# SDK-based application honour your network's proxy settings, set the following environment variables:
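A minimal sketch, assuming the conventional lower-case proxy variables honoured by the native gRPC library; the proxy host and port are placeholders:

```shell
# Conventional proxy environment variables (host and port are placeholders).
export https_proxy=http://proxy.example.com:3128
export http_proxy=http://proxy.example.com:3128
```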
To bypass the proxy for addresses your application may need to support, add:
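The conventional bypass variable is `no_proxy`, a comma-separated list; the entries below are placeholders:

```shell
# Comma-separated hosts/domains that should bypass the proxy (placeholders shown).
export no_proxy=localhost,127.0.0.1,.internal.example.com
```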
The gRPC core libraries support common proxy variables in the following forms:
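Per the gRPC core documentation, the library checks these variables in order of decreasing precedence; the values below are placeholders:

```shell
# Checked by gRPC core in this order of precedence (values are placeholders).
export grpc_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
export http_proxy=http://proxy.example.com:3128
```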
To bypass the proxy for addresses your application may need to support:
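gRPC core reads `no_grpc_proxy`, falling back to `no_proxy`; the domains below are placeholders:

```shell
# no_grpc_proxy takes precedence over no_proxy in gRPC core (placeholder values).
export no_grpc_proxy=localhost,.internal.example.com
export no_proxy=localhost,.internal.example.com
```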
Token uses Mutual TLS (mTLS), a common security practice that uses client TLS certificates to provide an additional layer of protection, to authenticate sessions between the Token Cloud and servers built with our SDKs. In brief, mTLS is invoked when both the server (the application built with the SDK) and the client (Token Cloud) present certificates to validate their identity. Once TLS is mutually verified, communication between the two entities can commence.
Within the application integrating with the SDK, a certificate and key must be provided. The sample applications demonstrate the structure.
Note: These will live in the config/tls directory within your project root.
To configure your integrated SDK application for mTLS:
- Generate a cert.pem and a key.pem
- Share the cert.pem with Token via a support ticket (https://support.token.io)
- Let Token upload the cert to our trust store.
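If you are not using Token's script, a self-signed key and certificate pair can be generated with OpenSSL along these lines (the subject common name is a placeholder for your own domain):

```shell
# Create a 2048-bit RSA key and a self-signed certificate valid for one year.
# The subject common name (CN) is a placeholder; substitute your own domain.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem \
  -days 365 -subj "/CN=sdk.example.com"
```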
Within the config/tls directory, you will find a file called trusted-certs.pem. This contains the root CA, intermediate and leaf certificates for Token's sandbox and production environments (api-grpc.sandbox.token.io and api-grpc.token.io, respectively). These are the certificates that Token will present when connecting to the application built with the SDK.
Token provides a script for creating self-signed certificates for use with mTLS. This script creates a private key and a self-signed certificate that is suitable for mTLS communication between entities, and is the lowest friction approach to implementing mTLS. The script will replace the cert.pem and key.pem files packaged with the sample applications.
Third-party Authority Certificates
If you wish to use a certificate from a third-party certificate authority (CA), you are more than welcome to do so. In this case, key.pem is replaced with the key created during the Certificate Signing Request (CSR) process, and cert.pem is replaced with the certificate returned to you by your CA. (A CSR is a block of encoded text given to a CA when applying for a certificate; it contains details such as the organisation name and common name, plus the public key, and is generated alongside a private key that must be kept secret, since the resulting certificate will only work with that key.) If your CA is not a root CA, you will need to ensure that you send your certificate bundle to Token, in addition to cert.pem, so that we have your full certificate chain and can validate your leaf certificate (cert.pem) against the root and intermediate certificates provided with the bundle. All certificate issuers include instructions on accessing the bundle within their proprietary documentation.
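For illustration, a private key and CSR might be generated with OpenSSL as follows (the subject fields are placeholders); the resulting request.csr is what you submit to your CA:

```shell
# Create a new private key and a certificate signing request (CSR) to submit
# to a CA. Subject fields are placeholders; use your organisation's details.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout key.pem -out request.csr \
  -subj "/CN=sdk.example.com/O=Example TPP/C=GB"
```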
* * * *
With these fundamentals in mind, you're ready to explore onboarding with Token using our REST API for TPPs.