Software engineering, problem solving, nerd stuff

Using Atlassian OpsGenie with a localized on-premises Jira instance

We are currently in the process of migrating our alerting infrastructure from OMD to Atlassian’s OpsGenie. Most of the features (SMS, phone calls, etc.) worked out of the box, but we struggled with pushing alerts back into our on-premises Jira instance.

Enable logging of POST requests

OpsGenie does not provide debug logs of the HTTP requests it executes against Jira’s REST API. Instead, only a very generic HTTP status code is provided (in our case, HTTP 400).

Luckily for us, our Jira instance is running behind an Apache HTTPD webserver acting as a proxy. With the help of the mod_security module we were able to trace the communication between OpsGenie and our Jira instance:

LoadModule security2_module modules/

<VirtualHost *:443>
    # ...
    <IfModule mod_security2.c>
        SecRuleEngine On
        SecAuditEngine On
        SecAuditLog /var/log/httpd/modsec_audit.log
        SecRequestBodyAccess on
        SecAuditLogParts ABIJDFHZ
    </IfModule>
</VirtualHost>
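After adding the module configuration, the changes can be activated with a quick syntax check and a reload; a sketch assuming a systemd-based host where Apache runs as httpd:

```shell
# verify the configuration before touching the running server
apachectl configtest
# reload Apache so mod_security starts writing the audit log
systemctl reload httpd
# follow the audit log while OpsGenie retries its request
tail -f /var/log/httpd/modsec_audit.log
```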

Configuring the Jira workflow schema

OpsGenie requires you to have at least one workflow with the following status transitions:

  • TODO/Open -> In Progress
  • In Progress -> Resolved

It is important that

  1. the statuses are named exactly “Resolved” and “In Progress”, as OpsGenie’s internal Jira connector is case-sensitive
  2. you do not confuse the status name with the status category

In English-based Jira installations this should not be an issue, but in our localized German environment we had to add both statuses to Vorgänge > Status and add their English translations to each status:

To check the correct names, you can access the REST API of your Jira instance at https://jira/rest/api/2/issue/${ISSUE_KEY}/transitions. The transitions[] field inside the JSON response must match the statuses above, e.g.

				"description":"Der Vorgang wird aktuell nicht bearbeitet und wurde noch nicht vollständig fertig gestellt.",
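The same check works from the command line; a quick sketch using curl (basic-auth credentials and the jira host are placeholders):

```shell
# list all transitions available for the given issue;
# the "name" fields must contain "In Progress" and "Resolved"
curl -s -u "$JIRA_USER:$JIRA_PASSWORD" \
  "https://jira/rest/api/2/issue/${ISSUE_KEY}/transitions" | python -m json.tool
```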

After we had configured the workflow schema, OpsGenie was able to create issues and transition them to the In Progress status.

Configuring the screen mask for solving open alerts

When we tried to close an open alert in OpsGenie, Jira failed with the HTTP 400 error described above. In our mod_security logs we saw the following output:

POST /rest/api/2/issue/${ISSUE_KEY}/transitions HTTP/1.1
Accept: text/plain, application/json, application/*+json, */*
Content-Type: application/json;charset=UTF-8
Accept-Encoding: gzip,deflate

HTTP/1.1 400 Bad Request

The transition ID 61 pointed to the transition from In Progress to Resolved, but its screen mask was obviously missing the “resolution” field. You can easily check the fields of a transition by accessing the issue’s transition configuration: https://jira/rest/api/2/issue/${ISSUE_KEY}/transitions?transitionId=61&expand=transitions.fields.
We added the missing Lösung (Solution) field to the screen mask of the transition but the error still occurred.

Translating the “Solution” field

Again, the resolution (Lösung) values have to be translated so that the value is called “Done” and not “Fertig”. You can change the translations at https://jira/secure/admin/ViewTranslations!default.jspa?issueConstantType=resolution.
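The configured resolution names can be verified through the REST API as well (credentials are placeholders again):

```shell
# lists all resolutions with their id, name and description;
# one entry must be named exactly "Done"
curl -s -u "$JIRA_USER:$JIRA_PASSWORD" "https://jira/rest/api/2/resolution"
```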

In the end, everything is working and OpsGenie is now able to create issues and move them through the expected statuses and transitions.

Receiving “Not Found” when using Jenkins’ pipeline-aws-plugin and s3Upload step with Minio

I am currently working on a Jenkins declarative pipeline to connect the Jenkins builds with Kubernetes, Helm and Netflix Spinnaker. One of my TODOs has been to deploy different artifacts (e.g. a Helm chart my-chart-0.0.1.tar.gz) to an AWS-S3-compatible bucket inside a Minio installation with the help of the pipeline-aws-plugin.

When running

withAWS(endpointUrl: 'https://minio.domain.tld', credentials: config.credentialsId) {
	s3Upload(file: "my-file.txt", bucket: "my-bucket")
}

my pipeline always threw an exception:

	Not Found (Service: Amazon S3; Status Code: 404; Error Code: 404 Not Found; Request ID: null; S3 Extended Request ID: null), S3 Extended Request ID: null
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(
	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(
	at com.amazonaws.http.AmazonHttpClient.execute(

Trying other clients with Minio

At first I suspected some misconfiguration of my Minio installation. I checked the S3 upload with mc and AWS’ own CLI. Both worked flawlessly so it had to be something else.
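The cross-check with both clients can be sketched like this (endpoint, bucket and credentials are placeholders):

```shell
# mc: register the Minio host and upload a test file
mc config host add myminio https://minio.domain.tld "$ACCESS_KEY" "$SECRET_KEY"
mc cp my-file.txt myminio/my-bucket/

# AWS CLI: point the S3 client at the Minio endpoint instead of AWS
aws --endpoint-url https://minio.domain.tld s3 cp my-file.txt s3://my-bucket/
```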

Enable logging

To get some more debugging output, I configured Jenkins to log events for com.amazonaws and org.apache.http.wire. The debugging output does not show up inside the build job’s console output but under the configured logger.

Host-style access to S3 buckets

After scanning the debug output, I noticed the following:

http-outgoing-11 >> "PUT /my-file.txt HTTP/1.1[\r][\n]"
Jan 21, 2019 9:36:15 PM FINE org.apache.http.impl.conn.Wire wire
http-outgoing-11 >> "Host: my-bucket.minio.domain.tld[\r][\n]"
Jan 21, 2019 9:36:15 PM FINE org.apache.http.impl.conn.Wire wire
http-outgoing-11 >> "x-amz-content-sha256: UNSIGNED-PAYLOAD[\r][\n]"
http-outgoing-11 << "[\r][\n]"
Jan 21, 2019 9:36:15 PM FINE org.apache.http.impl.conn.Wire wire
http-outgoing-11 << "default backend - 404"
Jan 21, 2019 9:36:15 PM FINE createException
Failed in parsing the error response : default backend - 404

When pipeline-aws-plugin initiates a request to my bucket, it does not request https://minio.domain.tld/my-bucket but https://my-bucket.minio.domain.tld. This is totally fine for AWS S3 buckets, but with the Minio deployment in our Kubernetes cluster it does not work out of the box:

  1. By default, our Minio deployment does not use the --address parameter.
  2. Our Minio ingress does not listen on fourth-level domains like my-bucket.minio.domain.tld either, so the nginx proxy returns the “default backend - 404” string seen in the log output above.

Solving the issue

Instead of configuring host-style access, I fixed it by simply setting pathStyleAccessEnabled: true in my s3Upload step. When enabled, pipeline-aws-plugin does not use the bucket name as a fourth-level subdomain but appends the bucket name to the host name:

withAWS(endpointUrl: 'https://minio.domain.tld', credentials: config.credentialsId) {
	s3Upload(pathStyleAccessEnabled: true, file: "my-file.txt", bucket: "my-bucket")
}

Running a Spring Boot JAR service with SELinux enabled

Just a quick reminder on how to run a Spring Boot JAR (or any other self-executing JAR) with SELinux enabled:

chcon --type=java_exec_t /opt/myapp/spring-boot-app.jar

To make this persistent you have to use the bin_exec_t type as java_exec_t is just an alias:

# apply the bin_exec_t
semanage fcontext -a -t bin_exec_t /opt/myapp/spring-boot-app.jar
# restore SELinux contexts
restorecon -R /opt/myapp

ll -Z /opt/myapp
# should look like
# -rwxr-xr-x. 1 myapp myapp unconfined_u:object_r:bin_t:s0 26500195 Aug 28 08:34 myapp.jar

To let systemd start this service, you have to create a systemd unit file at /etc/systemd/system/myapp.service:

[Unit]
Description=My Spring Boot application

[Service]
# service user and JAR path are placeholders – adjust to your setup
User=myapp
ExecStart=/opt/myapp/spring-boot-app.jar
SuccessExitStatus=143

[Install]
And don’t forget to add the service user, reload the systemd services and enable the myapp.service.
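These finishing steps can be sketched as follows (the service user name myapp is an assumption):

```shell
# create an unprivileged service user
useradd --system --home /opt/myapp --shell /sbin/nologin myapp
# pick up the new unit file, then enable and start the service
systemctl daemon-reload
systemctl enable --now myapp.service
```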

Using IPv6 with AWS Application Load Balancer (ALB)

Today I struggled for an hour or so to access an AWS-hosted web application through IPv6. Just follow these steps:

  • Get an IPv6 CIDR for your VPC: Go to VPC > Your VPCs > ${YOUR_VPC} > Edit CIDRs > Add IPv6 CIDR. The IPv6 CIDR is automatically chosen by AWS; you can’t configure the IPv6 CIDR on your own.
  • For the subnet(s) your ALB is located in, you have to allocate an IPv6 subnet from your previously generated IPv6 CIDR. Go to VPC > Subnets > ${YOUR_ALB_SUBNETS} > Edit IPv6 CIDRs > Add IPv6 CIDR. You can have 256 IPv6 subnets (/64 blocks out of the VPC’s /56).
  • You have to add an IPv6 default route to your routing table. In VPC > Route Tables > ${YOUR_ROUTING_TABLE} > Routes > Edit, add “Destination=::/0” and “Target=${YOUR_IGW_ID}” as a routing table entry. This was, by the way, the part I had forgotten.
  • Enable dualstack for your ALB. Go to EC2 > Load Balancers > ${YOUR_APPLICATION_LOAD_BALANCER} > Edit IP address type and select dualstack. The option is only available if your subnets have been previously configured with IPv6 CIDRs.
  • Your load balancer’s security group must allow HTTP and/or HTTPS traffic over IPv6. Go to EC2 > Security Groups > ${YOUR_APPLICATION_LOAD_BALANCERS_SECURITY_GROUP} and add the inbound and outbound rules “Protocol=TCP, Port Range=80, Source=::/0” and/or “Protocol=TCP, Port Range=443, Source|Destination=::/0”.
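The same console steps can be sketched with the AWS CLI (all IDs, the ARN and the subnet prefix are placeholders; the /64 must come out of the VPC’s assigned /56):

```shell
# 1. request an Amazon-provided IPv6 CIDR for the VPC
aws ec2 associate-vpc-cidr-block --vpc-id "$VPC_ID" --amazon-provided-ipv6-cidr-block
# 2. allocate an IPv6 /64 to each ALB subnet
aws ec2 associate-subnet-cidr-block --subnet-id "$SUBNET_ID" \
  --ipv6-cidr-block "2001:db8:1234:1a00::/64"
# 3. route all IPv6 traffic through the internet gateway
aws ec2 create-route --route-table-id "$ROUTE_TABLE_ID" \
  --destination-ipv6-cidr-block ::/0 --gateway-id "$IGW_ID"
# 4. switch the ALB to dualstack
aws elbv2 set-ip-address-type --load-balancer-arn "$ALB_ARN" --ip-address-type dualstack
# 5. allow inbound HTTPS over IPv6
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --ip-permissions 'IpProtocol=tcp,FromPort=443,ToPort=443,Ipv6Ranges=[{CidrIpv6=::/0}]'
```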

As soon as you have enabled dualstack mode for the ALB, AWS propagates a new AAAA DNS record for the load balancer. This takes a few minutes. You can access the load balancer over IPv6 by using the load balancer’s DNS name (the same one used for IPv4). The load balancer itself forwards HTTP requests to the backend servers over IPv4, so the EC2 instances do not need an IPv6 address of their own.

How to programmatically insert versioned initial data into Spring Boot applications

One of the common required tasks for an application using a persistence store is to initialize the underlying database with basic data sets. Most of the time this contains something like admin users or default roles.

Setting the stage

To give a proper example, we have the database table role with two columns id (primary key) as an internal ID and uuid (primary key) as an external key.
In Liquibase, our changeset for this table has the following definition:

	<changeSet author="schakko" id="schema-core">
		<createTable tableName="role">
			<column name="id" type="BIGSERIAL" autoIncrement="true">
				<constraints nullable="false" primaryKey="true" unique="true"
					uniqueConstraintName="unq_role_id" />
			</column>
			<column name="uuid" type="UUID">
				<constraints nullable="false" primaryKey="true" unique="true"
					uniqueConstraintName="unq_role_uuid" />
			</column>
			<column name="name" type="varchar(255)">
				<constraints nullable="false" unique="true" />
			</column>
		</createTable>
	</changeSet>

My requirements are:

  • I want to add multiple custom roles into this table
  • The uuid field must be randomly generated
  • The schema definition must work on H2 and PostgreSQL without the uuid-ossp module. Our application backend is responsible for the generation of UUIDs.

Initializing databases with Spring Boot’s native features

With Java, specifically Spring Boot, there are two ways to initialize the database:

  1. Hibernate, and therefore Spring Boot with JPA, checks for a file named import.sql in the root of the classpath. This file is executed on startup when Hibernate creates the schema.
  2. The file data.sql, or data-${platform}.sql for a concrete DBMS, is used for importing SQL data through the plain JDBC datasource, without any JPA involvement.
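The platform-specific variant is selected through a datasource property; a minimal sketch, assuming Spring Boot’s spring.datasource.platform property:

```properties
# application.properties
spring.datasource.platform=postgresql
# Spring Boot now executes data-postgresql.sql instead of data.sql on startup
```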

For simple tasks, both options are feasible. But in our case they can’t fulfil the requirements: a common SQL UUID generator function like generate_uuid() does not exist and probably won’t ever be standardized in SQL. So we would need two separate data.sql files, one for each database management system. In addition, we still don’t have access to the OSSP module for generating a UUID in PostgreSQL.

Inserting data programmatically

Why not use a simple ApplicationListener to generate the roles during the startup of the Spring framework?

public class InsertRoleStamdata implements ApplicationListener<ApplicationReadyEvent> {
	private final RoleRepository roleRepository;

	public InsertRoleStamdata(RoleRepository roleRepository) {
		this.roleRepository = roleRepository;
	}

	public void onApplicationEvent(ApplicationReadyEvent event) {
		if (roleRepository.count() > 0) {
			return;
		}
		roleRepository.save(new Role("ADMIN", java.util.UUID.randomUUID()));
	}
}

This obviously works and is executed on every application startup. With the if condition we ensure that we only insert a role if no role is present yet.
But what happens if the role ADMIN has to be renamed to ADMINISTRATOR? If you think about it, the code above can rapidly grow into an ugly monster with various condition checks and edge cases. And if you want to refactor it and split a migration into different classes, you have to retain the execution order of the listeners, and so on.
Besides that, we need some traceable versioning.

Using a schema migration tool

For obvious reasons, a schema migration tool like Liquibase or Flyway should be the way to go. But how can it fulfil our requirements?

In Liquibase we can define a changeset which uses the insert tag:

    <changeSet author="schakko" id="role-stamdata">
        <insert tableName="role">
            <column name="uuid" value="${random_uuid_function}"/>
            <column name="name" value="ADMIN"/>
        </insert>
    </changeSet>

This is fine, but as already mentioned:

Neither Flyway nor Liquibase is able to interpolate a variable placeholder (like ${random_uuid_function}) with a function callback defined in Java.

Using a schema migration tool programmatically

Fortunately, Flyway and Liquibase both support programmatically defined changesets: you can write Java code which executes the SQL statement. In Liquibase you have to use the customChange tag. The following snippet shows the required definition in YAML:

     - changeSet:
         id: create-default-roles
         author: schakko
         changes:
           - customChange:
               class: de.schakko.sample.changeset.DefaultRoles20171107

The class de.schakko.sample.changeset.DefaultRoles20171107 must implement the interface CustomTaskChange:

public class DefaultRoles20171107 implements CustomTaskChange {

	public String getConfirmationMessage() {
		return null;
	}

	public void setUp() throws SetupException {
	}

	public void setFileOpener(ResourceAccessor resourceAccessor) {
	}

	public ValidationErrors validate(Database database) {
		return null;
	}

	public void execute(Database database) throws CustomChangeException {
		JdbcTemplate jdbcTemplate = new JdbcTemplate(new SingleConnectionDataSource(
				((JdbcConnection) database.getConnection()).getUnderlyingConnection(), false));
		jdbcTemplate.update("INSERT INTO role (uuid, name) VALUES (?, ?)",
				new Object[] { java.util.UUID.randomUUID(), "ADMIN" });
	}
}


Liquibase’s Spring Boot auto-configuration is executed at an early stage in which Hibernate is not yet loaded. Because of this, we can’t inject any Spring Data JPA repositories by default. Even accessing the Spring context is not easy: you would need to provide the application context through a static attribute, and so on.
With Flyway, the Spring integration is much better.


This blog post demonstrated how initial data can be inserted into a Spring Boot application’s database. In addition, we discussed how this data can be versioned in a database-independent manner.

Website moved to new Uberspace with HTTPS

After migrating my domain to Route 53, I finally transferred my website to a new Uberspace host which supports Let’s Encrypt. You should be automatically redirected to HTTPS when visiting the site.
The whole procedure took 2 hours, including setting up the new Uberspace, importing the existing databases and changing the DNS records. Most of this was straightforward as the Uberspace team provides really good documentation for it.

BTW: Route 53 sets the TTL for each DNS record to 300 seconds by default. In most cases, one day should be sufficient. More DNS queries mean a higher bill.

Fixing periodically occurring WiFi lags when running Claymore’s Ethereum miner

This is a blog post which literally drove me crazy for a week. After building our mining rig, I experienced a bad WiFi connection with high pings, periodically occurring every 30 seconds.
Just scroll down to see my – fairly simple – solution.

Getting into the mining business

A few weeks ago some of my co-workers and I decided to build a simple mining rig to earn some Ethereum tokens. The exchange rate for Ethereum has fallen over the last days, but it is what it is. Anyhow, we bought 12 Nvidia GTX 1070s, 12 riser cards, 2 mainboards, 4 PSUs with 600 W each and a wattmeter. We assembled everything into an open metal cabinet, put an access point (Linksys with DD-WRT firmware) on top and connected the mainboards to the access point.
I have to say that the mining rig itself is located in one of our flats, in my study room. The access point on top of the cabinet acts as a wireless bridge to our other flat. Both mainboards and my workstation are connected to the access point with Ethernet cables. The other flat contains an additional access point with a cable modem and internet connectivity. Nothing fancy.
We switched from ethminer to Claymore’s Dual Ethereum miner due to some problems handling multiple cards and wallets. In the end, the rigs worked like a charm.

Experiencing lags in Overwatch

Two days later I wanted to play an Overwatch match on my workstation, which is also located in my study room. The ping was unstable, and a simple ping command showed that I had random timeouts; the ping spiked every 30 seconds from 20 ms to more than 1500 ms for a few seconds. This had not happened before the mining rigs were active.

“This must be a software problem of Claymore’s miners”

My first guess was that it had to be a software problem of Claymore’s miner. One of my co-miners had previously tested a single mainboard with one GPU at his home, and everything worked flawlessly. I started to analyze the problem:

  • Killed each Claymore miner process on rig1 and rig2: no lag occurred.
  • Started a single Claymore miner process: a lag occurred every 30 seconds with > 600 ms when the first Ethereum share was received. This indicated a problem in the network implementation of Claymore’s miner or some high bandwidth usage. I checked the bandwidth, but one Claymore miner instance requires just 12 kBit/s.
  • Started tcpdump on rig1 to identify any conspicuous network activity or packets. Neither UDP nor TCP traffic was eye-catching. I could only correlate the arrival of Ethereum shares with latency spikes. The used network bandwidth was still low.

“This must be a network problem with Claymore’s miner”

The last application with which I had slightly similar problems was Subversion. Ten years ago, SVN sometimes failed to commit data. It turned out that TortoiseSVN struggled with special packets, the MTU size of our company network and the MTU size of our ADSL connection. Because of this, I changed the MTU size of the rig running the single Claymore process. It did not change anything.

Before I tried something else, I disabled the network-related services firewalld and chronyd – without success. stracing the miner did not show anything special either.

“This must be a problem with Ethereum protocol and DD-WRT”

An interesting observation was that the pings rig -> ap2 (bridge) -> ap1 (router) -> internet and workstation -> ap2 (bridge) -> ap1 (router) -> internet were both bad, but pinging directly from the main access point ap1 (router) -> internet showed no problem. What the hell?
I suspected some TCP settings on ap2 (bridge) led to these hiccups. Luckily, I could check the network settings and stats of both access points (bridge and router) as both run DD-WRT. As you can imagine: there were no suspicious network stat (TCP/UDP) changes when a spike occurred.

Could this be a hardware problem?

As I could not see any problem in the software or on the network layers (>= L2), there could only be a generic hardware problem or some L1 error.
During my TCP stats investigation on the access points, I noticed that the WiFi rate of the bridge (ap2) was unstable and had heavy fluctuations. This was highly unusual, as it had not happened before the rigs were built.
To exclude any directly network-related problems, I did the simplest possible thing: I pulled the Ethernet cables of both rigs (each running one active miner process) so they were no longer connected to the access point. To my surprise, I still had network lags. WTF?
After killing both miner processes, the network lags went away. This obviously had to be a problem with the GPU load the mining process creates.

To give you some insight: due to some DD-WRT restrictions, the bridge between both access points uses 2.4 GHz and not 5 GHz. Could it be some interference on the wireless layer?
After googling for “gpu” and “spike”, some links caught my eye:

After reading both posts

  • I changed the WiFi channel from 1 to 11
  • I removed the DVI cable from a TFT connected to one rig
  • I removed the USB keyboard connected to one rig

Nothing changed. This was likely the point where I wanted to give up. The last thing to test was using another power connection. ap2 and all 4 PSUs of the rig were connected to the same connector: (psu1,psu2,psu3,psu4) -> wattmeter -> wall socket. Maybe there were some voltage spikes when the GPUs were under load, confusing the access point hardware?

Changing the wall socket

I had no free wall socket available behind the cabinet containing both rigs. So I moved the access point from the top of the rig to the floor, a few centimeters in the direction of the other wall. After the access point had power and was connected to ap1 (router) again, the network spikes lowered from 1600 ms to 800 ms. Uhm? I moved ap2 another 20 centimeters away from the cabinet. The spikes went down to 400 ms.

The revelation

At a distance of 1.5 meters between rig and access point, no more spikes occurred. I counterchecked whether the different wall socket was the solution, but switching from one wall socket to the wattmeter-connected connector made no difference.
So simple: just moving the access point away. This whole thing drove me crazy for at least 5 afternoons. I felt so stupid.

The high load of the GPU when running the Ethereum mining process produces either a signal at 2.4 GHz (which is rather unlikely) or a harmonic around 1.2 GHz (which is more likely). I assume that the spikes every 30 seconds occur when both rigs receive the same mining job at almost the same time and start mining. If anybody has more information, just let me know. I am highly interested in the technical explanation for this.

Transferring DNS from Uberspace to AWS Route 53

Vacation time means administration time. I am one of those Uberspace customers whose domain has been registered and managed not by an external DNS registrar but by Uberspace (or Jonas Pasche) itself. Uberspace stopped providing this service a few years ago. Actually this was not a problem and everything worked fine. The reason why I had to deal with it was that I originally wanted to enable Let’s Encrypt for my website – for obvious reasons. My space is still hosted on an older Uberspace server running CentOS 5, which has no Let’s Encrypt integration. To use LE, I had to move to a newer Uberspace server and just point my DNS records to the new host’s IPv4/IPv6 addresses.
This was the point where I thought about asking the Ubernauten to just change the DNS registration, and everything would have been good. But to be honest, I did not want the Ubernauten to follow some no longer supported procedures. I am a developer myself and know exactly how upsetting this can be. So I thought about alternatives and decided to go with AWS Route 53. This is by no means the cheapest solution, but I am planning to use AWS for my future private projects, so this fit best.

Preparing the current DNS entries

Route 53 requires that the contact information for the domain registrant (= domain owner, “Domaininhaber” in denic parlance) contains a valid e-mail address. This address is later used for the verification of the domain ownership. As I could not edit this information myself, I asked the Uberspace admins to change the e-mail address. Some information you might find useful:

  1. Contact information can be hidden to protect the privacy of the owner. This includes the e-mail address of the domain registrant.
  2. denic’s whois service does not show that this information is hidden because of privacy protection.
  3. denic’s field last update (“Letzte Aktualisierung” in German) does not get updated when the e-mail address is changed.

The last two bullet points highly irritated me, as I thought nothing had changed. Nevertheless, I started the domain transfer after having waited for two days.

Setting up the hosted zone

In AWS’ Route 53 administration panel you need to go to Hosted zones and click Create Hosted Zone. The following record sets have to be created:

Name    | Type | Value                     | Description
--------|------|---------------------------|-------------------------------------------------------------
(empty) | A    | IPv4 address of your host | see Uberspace datasheet
(empty) | AAAA | IPv6 address of your host | see Uberspace datasheet
(empty) | MX   | 0      | is your current Uberspace host; do not forget the leading 0!
www     | A    | IPv4 address of your host | see Uberspace datasheet
www     | AAAA | IPv6 address of your host | see Uberspace datasheet

Required record sets

Transferring the domain from Uberspace to Route 53

  • Log in into your AWS account and select Route 53
  • Go to Registered Domain > Transfer Domain
  • After having entered the domain name and selected the TLD you have to provide the Authorization code. This has been entered by the Uberspace guys in my ~/authcode file.
  • For the name server options you can either select Continue to use name servers provided by the current registrar or DNS service or Import name servers from a Route 53 hosted zone that has the same name as the domain. I mistakenly used the first option (see below); you should go with the second option. Route 53 then replaces the current NS entries with those of the previously created zone.

    Authorization code and name servers


  • After clicking Continue, you have to provide your contact information and make sure the checkbox Hide contact information if the TLD registry, and the registrar, allow it is checked.

A few minutes after you have purchased the domain, you will receive two e-mails:

  • Final transfer status from
  • Transferring to Route 53 succeeded from AWS

All in all it took no longer than 10 minutes. Fun fact: I did not receive an e-mail with a verification link. Providing the authcode seems to be sufficient.

Changing the nameservers in Route 53

As I have already written, I mistakenly left the name servers of my domain pointing to To change the NS entries you just have to go to Registered domains > $YOUR_DOMAIN > Add or edit name servers.
Replace the entries with the NS entries from your hosted zone ( etc.).

Update current nameservers


Please note that updating the NS entries takes some time. The TTL for has been set to 3600 seconds, so I had to wait around one hour until all my changes had been propagated.
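Propagation can be watched from the command line with dig (the domain is a placeholder); once the delegation is through, the name servers of the hosted zone show up:

```shell
# query the currently delegated name servers for the domain
dig NS example.com +short
```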

Running multiple Claymore miner instances with different wallets and GPUs

A few days ago I switched from ethminer to Claymore’s Dual Ethereum miner because ethminer has problems running multiple instances with multiple GPUs. My blog post How to run the same ethminer instance with multiple GPUs is still valid, but ethminer simply can’t handle two or more instances running in parallel.

In addition to my previous blog post I want to show you how to mine with multiple GPUs into different Ethereum wallets. The following has been tested with Fedora 25.

Add a configuration file for each of your wallets

For each of your wallets, create a configuration file named /etc/sysconfig/claymore-miner-$WALLETNAME with this content:

## placeholder values – replace with your own wallet and e-mail address
ETHER_ADDRESS=<your Ethereum wallet address>
EMAIL_ADDRESS=<your email address>
## GPUs must *not* be separated by comma or whitespace!
## Use the first three GPU cards on your mainboard
GPUS=012

Create a systemd service template

Create the file /etc/systemd/system/claymore-miner@.service and paste the configuration into it:

[Unit]
Description=Ethereum miner for %i

[Service]
## load the per-wallet variables from the matching configuration file
EnvironmentFile=/etc/sysconfig/claymore-miner-%i
ExecStart=/bin/bash --login -c "/opt/claymore-miner/ethdcrminer64 -epool -ewal ${ETHER_ADDRESS}/$YOUR_RIG_NAME/${EMAIL_ADDRESS} -nofee 0 -mport 0 -espw x -ftime 10 -di ${GPUS} -mode 1"
# -mode 0 -dcoin sia -dpool -dwal ${SIA_ADDRESS}/$YOUR_RIG_NAME/${EMAIL_ADDRESS}"
## If you want to dual-mine Siacoin, uncomment the line above and remove '-mode 1"' in the line before

  • Replace $YOUR_RIG_NAME with the name of your rig, whitespaces are not allowed
  • Uncomment the -mode 0 line to enable dual mining mode

Enable the service

We delayed the start of each miner after booting by adding a simple crontab entry:

@reboot sleep 60s; systemctl start claymore-miner@$WALLETNAME1; systemctl start claymore-miner@$WALLETNAME2

If you like it, and want say thank you, you can drop me some wei at 0x4c1856c9021db812f0b73785081b245f622d58ec 🙂

How to pass multiple parameters to systemd’s ExecStart – Running same ethminer instance with multiple GPUs with systemd

For our Ethereum mining rig, a coworker of mine wrote a systemd template unit, so it is relatively easy to configure which graphics card in the rig is assigned to whom.

For each of the GPU owners there exists a custom configuration file /etc/sysconfig/ethminer-$USERNAME (/etc/sysconfig/ethminer-ckl in my case). The file contains the following parameters:

ETHER_ADDRESS=<your Ethereum wallet address>
EMAIL_ADDRESS=<your email address>
# use first three GPUs, last three would be 3 4 5
OPENCL_DEVICES=0 1 2

Each user has their own configuration file, and all services can be started like this:

systemctl start ethminer@ckl

But when ethminer was started via systemctl start, only the first GPU in the definition was used – GPU 0 in the configuration sample above. systemd itself called the ethminer binary correctly, and the same command line worked when executed by hand. The problem lay in how systemd passes arguments and how ethminer reads them. In the end I fixed it by wrapping the ethminer command in a sub-bash process. Our unit definition in /etc/systemd/system/ethminer@.service looked like this:
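The underlying difference can be reproduced in a plain shell, as a rough analogy: systemd expands ${VAR} whitespace-intact as a single argument (like a quoted shell variable), while the wrapping bash process word-splits $VAR into separate arguments, which is what ethminer expects. A small sketch with a throwaway helper script:

```shell
# a helper that simply counts the arguments it receives
cat > /tmp/countargs.sh <<'EOF'
#!/bin/sh
echo "$# argument(s): $@"
EOF
chmod +x /tmp/countargs.sh

OPENCL_DEVICES="0 1 2"
# systemd's ${OPENCL_DEVICES} behaves like the quoted form: one single argument
/tmp/countargs.sh "$OPENCL_DEVICES"
# the unquoted shell expansion splits into three arguments
/tmp/countargs.sh $OPENCL_DEVICES
```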

[Unit]
Description=Mine Ether for %i

[Service]
## load the per-user variables from the matching configuration file
EnvironmentFile=/etc/sysconfig/ethminer-%i
# ExecStart=/usr/bin/ethminer --farm-recheck 2000 -G -S -O ${ETHER_ADDRESS}/rig02/${EMAIL_ADDRESS} --opencl-devices ${OPENCL_DEVICES}
ExecStart=/bin/bash --login -c "/usr/bin/ethminer --farm-recheck 2000 -G -S -O ${ETHER_ADDRESS}/rig02/${EMAIL_ADDRESS} --opencl-devices ${OPENCL_DEVICES}"