StateLB¶
In some installations of Jumpmind Commerce products, you might need to serve a stateful connection to many different stores from a single server. This is common in failover, thin deployments, and microcapabilities. As more and more stores hold a live connection to the server, the server sees increased load and eventually reaches a point of service degradation.
For operations staff, it is desirable to scale the Commerce server horizontally by introducing multiple replicas of the server and allowing the load balancer to distribute load across them. Because the connection is stateful, however, this can have undesirable effects: a client device that has to reconnect will likely land on a different server, and its state is lost. The obvious solution is to enable session affinity (sticky sessions) on the load balancer, but some non-obvious problems still remain in the system.

Typical load balancer implementations of session affinity are cookie based: the load balancer issues a cookie that describes which server to route to, and the browser sends that cookie with every subsequent request. Microcapability applications are likely to be served from a different host, and for security reasons the browser will not send the cookie to a separate host; in most load balancer implementations it is difficult or impossible to change the security constraints on the cookie.

Additionally, for the main Point-of-Sale application, when a CX Connect device is expected, both client connections must be made to the same server. A load balancer routing with session affinity does not understand this pairing, so it continues to route each client independently, and it is therefore unlikely that both the CX Connect and Point-of-Sale clients will connect to the same server.
To address these limitations, Jumpmind offers a lightweight L7 proxy that you deploy in-line between your existing load balancer and the Jumpmind Commerce applications. The proxy is stateless, yet it understands the stateful portions of the Jumpmind Commerce applications well enough to provide session affinity. While designed as a Kubernetes-native application, it can also run on its own with both static configuration and service discovery options. And because the proxy is stateless, it can itself scale horizontally without limitation, in addition to allowing the downstream Commerce application to scale horizontally.
```mermaid
flowchart LR
    client["Client"] --> lb["Load Balancer"]
    lb --> statelb-a["StateLB (Replica N*)"]
    statelb-a --> commerce-a["Commerce (Replica N*)"]
```
StateLB works by proxying HTTP requests and the Commerce WebSocket connection to a Commerce Server. Most Commerce HTTP requests are stateless REST calls, and the proxy distributes these with a standard round-robin. Stateful requests, including the Commerce WebSocket, carry the device ID as part of the protocol. StateLB infers the store ID from the device ID and computes a deterministic hash of the store ID, which in turn yields a deterministic choice of downstream Commerce Server. The store ID is used so that all devices from a single store, most importantly CX Connect devices, are routed to the same downstream Commerce Server.
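The routing rule can be pictured with a short sketch. The actual hash function, and the device ID format it parses, are internal to StateLB; the CRC32 hash and the `<store>-<device>` ID layout below are illustrative assumptions only:

```python
import zlib  # CRC32 stands in for StateLB's internal deterministic hash (an assumption)

def pick_replica(device_id: str, replica_count: int) -> int:
    """Route every device in a store to the same downstream Commerce replica."""
    store_id = device_id.split("-")[0]  # assumes a "<store>-<device>" ID layout
    return zlib.crc32(store_id.encode()) % replica_count

# A Point-of-Sale device and its paired CX Connect device from store 00100
# always land on the same replica, because only the store ID is hashed.
assert pick_replica("00100-001", 3) == pick_replica("00100-002", 3)
```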
Limitations¶
Because StateLB is a stateless solution that relies on a deterministic formula whose inputs include the number of downstream Commerce Servers, its routing results are sensitive to changes in the number of downstream Commerce Servers. While a retailer should be able to get away with occasional scaling changes, even during live operation, it is not recommended to change the downstream scale very frequently, as may happen when a Horizontal Pod Autoscaler (HPA) is in use.
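To make this sensitivity concrete, here is a standalone continuation of the earlier sketch. The replica count is the modulus, so changing it can remap a store to a different replica and sever its stateful connections:

```python
import zlib  # same illustrative stand-in hash as the sketch above

# Store 00100's replica assignment may differ under 3 vs. 4 downstream servers:
for replica_count in (3, 4):
    print(replica_count, zlib.crc32(b"00100") % replica_count)
```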
Enable with Helm¶
StateLB is included in the Helm chart provided with Jumpmind Commerce. Enabling StateLB for the expected services is all that is needed.
You can review the pos.loadBalancer, clienteling.loadBalancer, and inventory.loadBalancer sections of the Helm Values.
When <app>.loadBalancer.enabled is true, a StateLB Deployment is created and configured with the respective <app> as its downstream. By default, the Helm chart uses the recommended Kubernetes API method of service discovery. This requires a ServiceAccount and a Role with permissions to read Service and Endpoint resources from the Kubernetes API; the Helm chart creates these resources for you automatically. However, in the rare circumstance that a restricted environment prevents the creation of a Role or RoleBinding, you may try the DNS method of service discovery by setting <app>.loadBalancer.mode to dns, as shown in the second example below.
Example Helm values that enable StateLB for Point-of-Sale:
```yaml
pos:
  enabled: true
  loadBalancer:
    enabled: true
```
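In the rare restricted environment described above, the same values block can instead select the DNS method of service discovery via the mode field:

```yaml
pos:
  enabled: true
  loadBalancer:
    enabled: true
    mode: dns
```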
Kubernetes (Without Helm)¶
In cases where the recommended Helm chart is not being used, you may deploy to Kubernetes by filling out the parameters and applying the following resources:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: commerce-<APP>-statelb
---
# All StateLB deployments can share the same Role; it only needs to be created once.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: commerce-statelb
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
      - watch
      - list
---
# A single RoleBinding can bind to multiple ServiceAccounts, which are listed under the `subjects` field.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: commerce-statelb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: commerce-statelb
subjects:
  # List all the ServiceAccounts for each `<APP>` here.
  - kind: ServiceAccount
    name: commerce-<APP>-statelb
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: commerce-<APP>-statelb
data:
  config.toml: |
    [service_discovery.kubernetes_service]
    name = "<SERVICE_NAME_FOR_APP>"
    port = "http"

    [route]
    device_affinity_prefixes = [
      "/api/appId/pos/deviceId",
      "/api/app/pos/node",
      "/api/app/customerdisplay/node",
      "/api/appId/customerdisplay/deviceId",
    ]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: commerce-<APP>-statelb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: <APP>-statelb
  template:
    metadata:
      labels:
        app: <APP>-statelb
    spec:
      automountServiceAccountToken: true
      serviceAccountName: commerce-<APP>-statelb
      containers:
        # Note: Replace with the location of your image and version combination
        - image: us-east5-docker.pkg.dev/jumpmind-customer-acme/commerce-container-virtual/statelb:243.5.0
          name: statelb
          ports:
            - containerPort: 7900
              name: http
              protocol: TCP
          resources:
            requests:
              cpu: 100m
              memory: 64Mi
            limits:
              cpu: 500m
              memory: 128Mi
          volumeMounts:
            - mountPath: /etc/config/commerce_statelb/
              name: config
      volumes:
        - name: config
          configMap:
            defaultMode: 420
            name: commerce-<APP>-statelb
---
apiVersion: v1
kind: Service
metadata:
  name: commerce-<APP>-statelb
spec:
  type: ClusterIP
  selector:
    app: <APP>-statelb
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
```
Once the StateLB service is running, update your existing Ingress, which currently points at the <APP> Service as its downstream, to instead point at the commerce-<APP>-statelb Service.
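As a sketch, assuming a standard networking.k8s.io/v1 Ingress for the Point-of-Sale application (the hostname is hypothetical), only the backend Service reference changes:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: commerce-pos
spec:
  rules:
    - host: pos.example.com  # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: commerce-pos-statelb  # previously the pos Commerce Service
                port:
                  name: http
```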