Kubernetes Deployment¶
Jumpmind provides out-of-the-box support for deploying central office commerce resources into Kubernetes. Jumpmind's support is cloud-platform agnostic, allowing it to run on Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), Red Hat OpenShift, and others. To support this, Jumpmind provides pre-built linux/amd64 and linux/arm64/v8 containers for all of its relevant components. Finally, a Helm chart was developed to ease the burden of Kubernetes configuration.
Preparing Containers¶
Jumpmind provides access to a dedicated retailer container registry from which container artifacts can be pulled. While not strictly required, a retailer is advised to maintain a copy of their artifacts within their local infrastructure. This provides redundancy and makes it easier to authenticate the registry for Kubernetes. Some artifact registries make this easy by allowing the registry to be configured as a proxy that passes un-matched requests through and caches the results. Refer to your container registry provider's documentation for more information.
A URL and access key will be provided to the retailer by Jumpmind's support team. If you are having trouble locating this information, reach out to Jumpmind support for assistance. Since this information is unique to each retailer, the remainder of this documentation uses an example retailer, "ACME Corporation", when referring to a container registry.
The container images can be pulled directly from Jumpmind's retailer container registry by combining the retailer's provided URL with the application identifier. Using ACME Corporation as an example: us-east5-docker.pkg.dev/jumpmind-customer-acme/commerce-container-virtual/<application>:<tag>
<application> may be replaced with any one of the following components:
central - Jumpmind's Central application containing tools like Promote, Deployment Management, and Electronic Journal
unified-promo-engine - An API wrapper around Jumpmind's Promotion Engine
symds - Jumpmind Commerce redistribution of Jumpmind's SymmetricDS Pro product
metl - Jumpmind Commerce redistribution of Jumpmind's METL product
inventory - The Commerce Inventory micro component
clienteling - The Commerce Clienteling micro component
shopkeeper - The Commerce Shopkeeper server application
state-relay - gRPC-based relay service to handle NAT traversals for dealing with bi-directional event streams from stores and Commerce Shopkeeper
vertex-integration - API server for integrating Vertex tables with Commerce tables
publisher - The Commerce Publisher service dedicated to queuing and publishing data to external systems from Commerce
<tag> may be replaced with the appropriate version, such as 243.0.0. Additional tags also exist to choose different versions of the Commerce components, such as 243.0-latest, which, as the name suggests, points to the latest patch release of 243.0.
Pre-built containers are generated for both Debian Bookworm and Alpine Linux 3.19, and a retailer may choose their preference; however, Alpine Linux images do not support the linux/arm64/v8 target due to a limitation of the Java Runtime. You may choose the base OS using the -bookworm or -alpine suffix, for example 243.0.0-bookworm and 243.0.0-alpine. Note that Debian Bookworm is used by default for the 243.0.0 tag.
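To verify access before wiring anything into Kubernetes, you can authenticate a local container engine and pull an image directly. The sketch below assumes the same _json_key_base64 user and access key used for the Helm registry login shown later in this page also work for docker login, and the jumpmind-registry pull secret name is purely illustrative; adjust the registry host and credentials to match what Jumpmind support provided.

# Authenticate the local container engine against the Jumpmind registry
# (credential format is assumed to match the Helm login shown later)
export JUMPMIND_REGISTRY_AUTH_KEY=...   # key supplied by Jumpmind support
echo "$JUMPMIND_REGISTRY_AUTH_KEY" | docker login us-east5-docker.pkg.dev -u _json_key_base64 --password-stdin

# Pull a specific component, version, and base OS variant
docker pull us-east5-docker.pkg.dev/jumpmind-customer-acme/commerce-container-virtual/central:243.0.0-bookworm

# If the cluster pulls directly from this registry (rather than an internal
# mirror), an image pull secret can hold the same credentials; the secret
# name "jumpmind-registry" is hypothetical
kubectl create secret docker-registry jumpmind-registry \
  --docker-server=us-east5-docker.pkg.dev \
  --docker-username=_json_key_base64 \
  --docker-password="$JUMPMIND_REGISTRY_AUTH_KEY"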
Point-of-Sale Container¶
The Point-of-Sale container generally needs to be built separately with the retailer customizations included.
In order to work with the Helm chart, the container must be built with the directory layout that the chart expects to already exist within the container. Below is a sample of getting started building a container using Docker or Podman with Jumpmind's expected container layout. Retailer modifications to the container are tolerated as long as the container's layout remains the same.
- Create an acme-container Gradle project:

  mkdir acme-container
- Create an acme-container/build.gradle file to include contents that depend on the acme-base project and create a distribution:

  plugins {
      id 'distribution'
  }

  dependencies {
      implementation project(':acme-base')
  }

  distributions {
      commerce {
          distributionBaseName = "commerce-container"
          version = null
          contents {
              into ('/content') {
                  from 'content/'
              }
              into ('/lib') {
                  from configurations.runtimeClasspath
                  from project.jar
              }
          }
      }
  }
- Create an acme-container/logback.xml configuration file with desired contents; here is what other containers use:

  <configuration>
      <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
          <!-- Prefer the JSON format so that other tools have an easier time scraping and indexing information (e.g. stack traces) -->
          <encoder class="ch.qos.logback.classic.encoder.JsonEncoder"/>
      </appender>
      <!-- Allow OpenTelemetry to see the logs -->
      <appender name="OTEL" class="io.opentelemetry.instrumentation.logback.appender.v1_0.OpenTelemetryAppender">
      </appender>

      <logger name="org" level="WARN" />
      <logger name="org.jumpmind" level="INFO" />
      <logger name="org.jumpmind.pos.persist.driver" level="WARN" />
      <logger name="org.springframework" level="WARN" />
      <logger name="org.springframework.web.filter.CommonsRequestLoggingFilter" level="DEBUG" />
      <logger name="org.jumpmind.db.alter" level="DEBUG" />
      <logger name="org.eclipse.jetty" level="ERROR" />
      <logger name="org.jumpmind.symmetric.util.PropertiesFactoryBean" level="ERROR" />
      <logger name="org.jumpmind.symmetric.service.impl.ParameterService" level="ERROR" />
      <logger name="org.jumpmind.symmetric.db.SqlScript" level="ERROR" />
      <logger name="org.springframework.boot.autoconfigure.freemarker" level="ERROR" />
      <logger name="org.springdoc.core" level="ERROR" />
      <logger name="org.jumpmind.pos.core.service.ClientLogCollectorService" level="DEBUG" />
      <logger name="com.dls" level="DEBUG" />

      <root level="INFO">
          <appender-ref ref="OTEL" />
          <appender-ref ref="CONSOLE" />
      </root>
  </configuration>
- Create acme-container/Dockerfile with similar contents:

  ARG BASE_CONTAINER_IMAGE=us-east5-docker.pkg.dev/jumpmind-customer-acme/commerce-container-virtual/commerce-jre:17-bookworm

  # Build stage: assemble the Commerce distribution with Gradle
  FROM gradle:jdk17-jammy as build
  COPY . /work/
  WORKDIR /work
  RUN [ "gradle", "acme-container:commerceDistTar", "--no-daemon", "--refresh-dependencies" ]
  RUN mkdir /work/out && tar -xf /work/acme-container/build/distributions/commerce-container.tar -C /work/out lib

  # Runtime stage: layer the distribution onto Jumpmind's JRE base image
  FROM $BASE_CONTAINER_IMAGE
  RUN mkdir -p /jumpmind/tmp && chgrp -R 0 /jumpmind/tmp && chmod -R g=u /jumpmind/tmp && chown -R 1000:0 /jumpmind/tmp
  RUN mkdir -p /jumpmind/logs && chgrp -R 0 /jumpmind/logs && chmod -R g=u /jumpmind/logs && chown -R 1000:0 /jumpmind/logs
  RUN mkdir -p /jumpmind/work && chgrp -R 0 /jumpmind/work && chmod -R g=u /jumpmind/work && chown -R 1000:0 /jumpmind/work
  RUN mkdir -p /jumpmind/snapshots && chgrp -R 0 /jumpmind/snapshots && chmod -R g=u /jumpmind/snapshots && chown -R 1000:0 /jumpmind/snapshots
  COPY logback.xml /jumpmind/base/conf/logback.xml
  COPY --from=build /work/out/lib/ /jumpmind/base/lib
  USER 1000
  EXPOSE 6140
  WORKDIR /jumpmind
  ENTRYPOINT [ "java", "-classpath", "/jumpmind/base/lib/*:/jumpmind/extend/lib/*", "-Dspring.config.additional-location=optional:file:/jumpmind/extend/config/", "-Dspring.profiles.active=base,foundation,businessunit,corp,integration,intellij,env,swagger-ui", "-Djava.io.tmpdir=/jumpmind/tmp", "-Dlogging.config=file:/jumpmind/extend/logback.xml", "org.jumpmind.pos.app.Commerce" ]
- Build the container; from the root of the repository run:

  docker build -t acme-pos:latest -f ./acme-container/Dockerfile .
- Test by running locally:

  docker run --rm -p 6140:6140 acme-pos:latest
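When the image needs to be published for the cluster to pull, it can be tagged and pushed to the retailer's internal registry. The sketch below is illustrative only: the acme-internal-registry.acr.io/pos repository and 243.0.0-acme-build-1 tag simply mirror the values used in the Helm example later on this page, and docker buildx is only needed when both linux/amd64 and linux/arm64/v8 variants are required.

# Single-architecture build and push (repository and tag are illustrative)
docker build -t acme-internal-registry.acr.io/pos:243.0.0-acme-build-1 -f ./acme-container/Dockerfile .
docker push acme-internal-registry.acr.io/pos:243.0.0-acme-build-1

# Or build and push both architectures in one step with buildx
docker buildx build \
  --platform linux/amd64,linux/arm64/v8 \
  -t acme-internal-registry.acr.io/pos:243.0.0-acme-build-1 \
  -f ./acme-container/Dockerfile \
  --push .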
Extending a Container¶
In the rare circumstance where a retailer needs to provide supplemental extensions on top of an out-of-the-box Commerce container, every container adds the /jumpmind/extend/lib/ directory to its classpath and the /jumpmind/extend/config/ directory to Spring's additional configuration location. You may choose to mount relevant files into these directories at runtime, or even use the out-of-the-box image as the base image for extending and adding files at build time.
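For example, supplemental jars and configuration can be mounted at runtime. A minimal sketch, assuming the clienteling image as the container being extended; the acme-extension.jar and application.yml file names are purely illustrative, and in Kubernetes the same effect is typically achieved by mounting a ConfigMap or volume at these paths.

# Mount an extension jar and extra Spring configuration into an
# out-of-the-box Commerce container at runtime (file names are illustrative)
docker run --rm \
  -v "$(pwd)/acme-extension.jar:/jumpmind/extend/lib/acme-extension.jar" \
  -v "$(pwd)/application.yml:/jumpmind/extend/config/application.yml" \
  us-east5-docker.pkg.dev/jumpmind-customer-acme/commerce-container-virtual/clienteling:243.0.0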
Helm chart¶
In combination with the containers, a monolithic Commerce Helm chart may be used to install the various needed Commerce applications. The Helm chart is monolithic in nature so that each deployable application has an understanding of the other components being deployed and can automatically configure the applications that need to communicate with each other. For example, if you enable the installation of pos and clienteling, the pos configuration will automatically know you want to use the clienteling micro component and configure itself to expose it through the configured Ingress (if applicable). This eases the burden of configuration during deployment and cuts down on misconfiguration errors between cloud components.
The Helm chart is distributed using an OCI-compatible registry and can be accessed through the Helm CLI or various other supporting tools such as FluxCD or Argo CD. Using ACME Corporation as the example retailer, we can authenticate the Helm CLI using the following command, where $JUMPMIND_REGISTRY_AUTH_KEY is an environment variable containing the Jumpmind-supplied authentication key:
helm registry login https://us-east5-docker.pkg.dev -u "_json_key_base64" -p "$JUMPMIND_REGISTRY_AUTH_KEY"
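Once logged in, it can be useful to inspect the chart and its default values before installing anything; a quick sketch, where 243.0.0 is simply the example release used throughout this page:

# Download the chart locally for inspection
helm pull oci://us-east5-docker.pkg.dev/jumpmind-customer-acme/commerce-helm-virtual/commerce --version 243.0.0

# Print the chart's default values to see what can be overridden
helm show values oci://us-east5-docker.pkg.dev/jumpmind-customer-acme/commerce-helm-virtual/commerce --version 243.0.0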
Once authenticated you can use the helm commands in whatever manner is relevant to the scenario. For example, to initially install Commerce:
helm install example oci://us-east5-docker.pkg.dev/jumpmind-customer-acme/commerce-helm-virtual/commerce --version 243.0.0 -f ./prod-helm-values.yaml
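Subsequent configuration changes (or newer chart versions) are applied with helm upgrade against the same release name. The sketch below reuses the release name, chart reference, and values file from the install above; the --set flags simply illustrate the monolithic behavior described earlier, where enabling pos and clienteling together lets the chart wire the two applications to each other.

# Apply updated values (or a newer chart --version) to the existing release;
# enabling pos and clienteling together lets the chart auto-configure them
helm upgrade example oci://us-east5-docker.pkg.dev/jumpmind-customer-acme/commerce-helm-virtual/commerce --version 243.0.0 \
  -f ./prod-helm-values.yaml \
  --set pos.enabled=true \
  --set clienteling.enabled=true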
In this example we supply alternative Helm values that are layered on top of the Helm chart's default values. You will typically need to supply alternative values that meet your environment's criteria. The Helm values are broken up into sections that are delineated by their application name. Each application section is nearly identical, aside from applications that have additional configuration options. A basic example might look like this:
# ./prod-helm-values.yaml
pos:
  enabled: true
  replicaCount: 3
  image:
    # override to use internal ACME registry
    repository: acme-internal-registry.acr.io/pos
    tag: 243.0.0-acme-build-1
  # tune the workload for production use cases
  resources:
    requests:
      cpu: 2000m
      memory: 3Gi
    limits:
      cpu: 4000m
      memory: 4Gi
  # use the following Spring Profiles to run the application -- various others
  # might get added automatically
  additionalSpringProfiles: acme,prod
  # You may supply Commerce configuration on the fly during deployment. This
  # will be automatically loaded near the end of the Spring Profile chain.
  additionalApplicationConfiguration:
    openpos:
      example:
        config: true
symds:
  enabled: true
  image:
    # in this example, the internal registry is a proxy for Jumpmind's distribution registry
    repository: acme-internal-jumpmind-proxy.acr.io/symds
    tag: "243.0.0"
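Before applying a values file like this to a live environment, the rendered output can be reviewed first. A minimal sketch using the same chart reference and release name as above:

# Render the manifests locally to review what the values file produces
helm template example oci://us-east5-docker.pkg.dev/jumpmind-customer-acme/commerce-helm-virtual/commerce --version 243.0.0 -f ./prod-helm-values.yaml

# Or perform a dry run of the upgrade/install against the target cluster
helm upgrade --install example oci://us-east5-docker.pkg.dev/jumpmind-customer-acme/commerce-helm-virtual/commerce --version 243.0.0 -f ./prod-helm-values.yaml --dry-run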
You may refer to the Helm Values page for an exhaustive list of possible Helm values.