Deploying with SSH using GitHub Actions

Shortly after I had started the work on nerdhood.de I built a deployment pipeline. The build script (.sh) for my Laravel application was easy, but triggering the deployment itself turned out to be more difficult than expected. In the end I built something with two AWS Lambda functions, SNS, an S3 bucket for a private key and the Serverless Framework. But that is another story.

Before I built the deployment pipeline I had signed up for GitHub Actions. Yesterday my access was confirmed, and today I replaced my AWS pipeline with a few lines of YAML.

First of all, Actions’ HCL syntax will be deprecated at the end of September 2019. Most of the available examples are still written in HCL, but porting from HCL to YAML is straightforward.

The second important thing is that you have to enter your secrets, e.g. my SSH deployment key, in your GitHub project’s Settings > Secrets. If your secret is named SSH_DEPLOYMENT_KEY, you can reference its content later with ${{ secrets.SSH_DEPLOYMENT_KEY }}.

I ended up with this YAML to trigger my shell script over SSH:

name: CI
on: [push]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - name: Deploy to nerdhood.de
      uses: appleboy/ssh-action@master
      env:
        USERNAME: my_username
        HOST: my_host
        SCRIPT: ~/deployment.sh
        KEY: ${{ secrets.SSH_DEPLOYMENT_KEY }}

Super Micro X10 hangs with “PEI – Intel Reference Code Execution (A9)”

For one of our customers we had to set up two webserver environments on physical servers. We picked up both server systems (each with a Super Micro X10DRI-LN4+ and a PNY Quadro P1000 installed in addition to other components) and booted them up. During the IPMI initialization phase, the whole process hung with PEI – Intel Reference Code Execution … (A9). The status code 0xA9 itself only means that the setup has started.

We waited more than an hour but nothing happened, so we tried everything suggested:

  1. Doing multiple power cycles.
  2. Popping out every SSD to make sure that the hang was not caused by some HDD hardware failure.
  3. Doing a BIOS recovery as described at various places – which did not work as our USB stick and the Ctrl+Home shortcut had no effect.

In the end – and after a lot of cursing – we phoned the hardware distributor of the systems. He gave us the hint to attach a second monitor to the PNY Quadro in addition to the monitor connected to the onboard graphics adapter.

Right after we had attached the monitor to the PNY Quadro, the A9 status went away and we were able to enter the BIOS. It turned out that most Super Micro mainboards detect whether a second graphics adapter is installed and simply hang until a display is attached to it. Once we were able to access the BIOS we switched the primary display mode from Offboard to Onboard and everything worked.

I felt very stupid.

Configuring WooCommerce for selling B2B software

Most of my articles are written in English, but this topic is more or less focused on German businesses.

For the WordPress plug-in Next Active Directory Integration we offer a support license in several tiers as a virtual service. Payment for the service is handled via PayPal; in the backend, administration and sales are done through WooCommerce.

Legal requirements and rules

The easiest way is to ask your tax advisor; the chamber of commerce (IHK) responsible for us was not very helpful in this regard. The following information comes directly from our tax advisor:

  1. The customer needs the invoice in a format such as PDF. It must contain the general company data (address, bank details and VAT identification number).
  2. When selling software to companies based in the EU, no 19% VAT has to be shown. Instead, the note “reverse charge” must be included on the invoice.
  3. For sales to private individuals within the EU (which does not apply to us), 19% VAT must be shown up to € 10,000.00. Anything above that requires a registration abroad.
  4. For sales outside the EU (e.g. USA, Canada, Australia) no VAT is shown.

The most important fact is that B2B sales of digital products or services do not have to be invoiced with VAT.

Plug-ins

The most important WordPress plug-in for a German WooCommerce shop is certainly Germanized for WooCommerce. It takes care of a large part of the legal requirements for you.

Configuration

Invoices as PDF

With the help of Germanized, the PDF invoice can be defined under WooCommerce > Settings > Germanized > Invoices & Packaging Slips. The PDF menu item of Germanized is actually not responsible for the PDF invoice but for the terms and conditions etc.

As soon as WooCommerce generates an invoice, our company letterhead is used as the template. Additional texts are rendered in our corporate font:

Since we really only sell to companies, we put the information about the reverse charge rule directly at the end of the generated PDF. The shortcode render_reverse also exists, but we deliberately left it out.

Attaching invoices to e-mails

In our workflow we have enabled WooCommerce > Settings > Emails > Customer invoice. We also receive the sent invoices ourselves, so our tax advisor can use them for the bookkeeping:

In addition, the option Enable PDF invoices must be activated under WooCommerce > Settings > Germanized > General Options > Invoices – otherwise the e-mail configuration has no effect.

VAT configuration

To make sure that only companies within the EU member states buy the software, the shop is configured to validate the VAT ID. The VAT ID is basically nothing other than the company’s VAT registration number – in Germany the Umsatzsteuer-Identifikationsnummer, in Austria for example the ATU number. In Germanized, the VAT-related configuration is done under WooCommerce > Settings > Germanized > General Options.

With the VAT Check option, the VAT ID of EU-based customers is validated against a web service of the EU. Customers outside the EU do not get to see this field; it is shown or hidden depending on the company’s country. If the VAT Check is enabled, the VAT ID can be entered.
It is also important that the PHP module php-soap is enabled on the server side. Otherwise a spinner appears above the purchased products during checkout and the request fails with SoapClient is required to enable VAT validation.
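
A quick, hedged way to verify this on the server is a shell check (the package and service names are assumptions and depend on your distribution):

# check whether the SOAP extension is loaded for PHP
php -m | grep -i soap

# on Debian/Ubuntu based servers it can usually be installed like this
sudo apt-get install php-soap
sudo systemctl reload apache2   # or restart php-fpm, depending on the setup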

Since we only sell to B2B customers, we use the option VAT ID field shall be mandatory. This means EU-based customers must provide their VAT ID. Whether we will actually keep it that way remains to be seen, since it excludes states and municipalities from buying.

If a WooCommerce shop sells both digital and non-digital products, the option Virtual Products B2B must be selected. Non-digital products are then invoiced with tax. We do not need this.

Texts

  1. We point out that we only do B2B business within the EU member states.
  2. For sales to the USA we state that we provide the W-8BEN-E form on request.

Selling to US government agencies & the W-8 / W-8BEN-E form

US government agencies require the W-8 form for tax purposes. Whether and how it is also required by non-government organizations, I cannot say.

In our specific case the W-8BEN-E form is required, since we are a GmbH. Filling out the form is quite simple:

Part I

  1. Name of the company
  2. Country of incorporation (Germany)
  3. Leave blank
  4. Chapter 3 status is Corporation in our case
  5. Chapter 4 status is Active NFFE in our case – which stands for Non-Financial Foreign Entity
  6. Address
  7. Leave blank in our case
  8. Leave blank, as we do not have a TIN
  9. Select b) and enter the company’s VAT ID

Part II

11. Chapter 4 status is Branch treated as nonparticipating FFI in our case
12. Address of the bank
13. Leave blank

Part XXX

Name and signature of the managing director at the bottom, plus the company stamp.

Using Atlassian OpsGenie with a localized on-premises Jira instance

We are currently in the process of migrating our alerting infrastructure from OMD to Atlassian’s OpsGenie. Most of the features (SMS, phone call etc.) worked out of the box but we struggled with pushing alerts back into our on-premises Jira instance.

Enable logging of POST requests

OpsGenie does not provide debug logs of the HTTP requests it executes against Jira’s REST API. Instead, only a very generic HTTP status code is reported.

Lucky for us, our Jira instance is running behind an Apache HTTPD webserver acting as a proxy. With the help of the mod_security module we were able to trace the communication between OpsGenie and our Jira instance:

LoadModule security2_module modules/mod_security2.so

<VirtualHost *:443>
# ...
    <IfModule mod_security2.c>
       SecRuleEngine On
       SecAuditEngine On
       SecAuditLog /var/log/httpd/modsec_audit.log
       SecRequestBodyAccess on
       SecAuditLogParts ABIJDFHZ
    </IfModule>
</VirtualHost>

Configuring the Jira workflow schema

OpsGenie requires you to have at least one workflow with the following status transitions:

  • TODO/Open -> In Progress
  • In Progress -> Resolved

It is important that

  1. the statuses are named exactly “Resolved” and “In Progress”, as OpsGenie’s internal Jira connector is case-sensitive
  2. you do not confuse the status name with the status category

In English-based Jira installations this should not be an issue, but in our localized German environment we had to add both statuses under Vorgänge > Status and add their English translations to each status:

To check the correct names, you can access the REST API of your Jira instance at https://jira/rest/api/2/issue/${OPSGENIE_PROJECT}/${ISSUE_ID}/transitions. The transitions[].to.name field inside the JSON response must match the statuses above, like

{
	"expand":"transitions",
	"transitions":[
		{"id":"41","name":"Öffnen",
			"to":{
				"self":"https://jira/rest/api/2/status/10617",
				"description":"Der Vorgang wird aktuell nicht bearbeitet und wurde noch nicht vollständig fertig gestellt.",
				"iconUrl":"https://jira/images/icons/statuses/open.png",
				"name":"Offen","id":"10617",
				"statusCategory":{"self":"https://jira/rest/api/2/statuscategory/2","id":2,"key":"new","colorName":"blue-gray","name":"Aufgaben"}
			}
		},
		{"id":"61","name":"Resolve",
			"to":{
				"self":"https://jira/rest/api/2/status/5",
				"description":"Resolved",
				"iconUrl":"https://jira/images/icons/statuses/resolved.png",
				"name":"Resolved",
				"id":"5",
				"statusCategory":{"self":"https://jira/rest/api/2/statuscategory/3","id":3,"key":"done","colorName":"green","name":"Fertig"}
			}
		}
	]
}

After we had configured the workflow schema, OpsGenie was able to create issues and transition them to the In Progress status.

Configuring the screen mask for solving open alerts

When we tried to close an open alert in OpsGenie, Jira failed with the HTTP 400 error described above. In our mod_security logs we saw the following output:

POST /rest/api/2/issue/${ISSUE_KEY}/transitions HTTP/1.1
Accept: text/plain, application/json, application/*+json, */*
Content-Type: application/json;charset=UTF-8
....
Accept-Encoding: gzip,deflate

--8ddfb330-C--
{"transition":{"id":"61"},"fields":{"resolution":{"name":"Done"}}}
--8ddfb330-F--
HTTP/1.1 400 Bad Request

The transition.id 61 pointed to the transition from In Progress to Resolved, but its screen mask was obviously missing the “resolution” field. You can easily check the fields of a transition by accessing the issue’s transition configuration: https://jira/rest/api/2/issue/${ISSUE_KEY}/transitions?61&expand=transitions.fields.
We added the missing Lösung (Solution) field to the screen mask of the transition, but the error still occurred.
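
For reference, this is roughly the curl call we could use to inspect the fields of the transitions (host, credentials and issue key are placeholders):

# list the transitions of an issue, including the fields of each transition screen
curl -s -u "$JIRA_USER:$JIRA_PASSWORD" \
  -H "Accept: application/json" \
  "https://jira/rest/api/2/issue/${ISSUE_KEY}/transitions?expand=transitions.fields"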

Translating the “Solution” field

Again, the Solution (resolution) field has to be translated so that it is called “Done” and not “Fertig”. You can change the translations at https://jira/secure/admin/ViewTranslations!default.jspa?issueConstantType=resolution.

In the end everything is working, and OpsGenie is now able to create issues and move them through the expected statuses and transitions.

Receiving “com.amazonaws.services.s3.model.AmazonS3Exception: Not Found” when using Jenkins’ pipeline-aws-plugin and s3Upload step with Minio

I am currently working on a Jenkins declarative pipeline to connect the Jenkins builds with Kubernetes, Helm and Netflix Spinnaker. One of my TODOs was to deploy different artifacts (e.g. a Helm chart my-chart-0.0.1.tar.gz) to an AWS S3-compatible bucket inside a Minio installation with the help of the pipeline-aws-plugin.

When running

withAWS(endpointUrl: 'https://minio.domain.tld', credentials: config.credentialsId) {
	s3Upload(file: "my-file.txt", bucket: "my-bucket")				
}

my pipeline always threw an exception with

com.amazonaws.services.s3.model.AmazonS3Exception: Not Found (Service: Amazon S3; Status Code: 404; Error Code: 404 Not Found; Request ID: null; S3 Extended Request ID: null), S3 Extended Request ID: null
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1695)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1350)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1101)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:758)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:732)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:714)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:674)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:656)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:520)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4705)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4652)

Trying other clients with Minio

At first I suspected some misconfiguration of my Minio installation. I checked the S3 upload with mc and AWS’ own CLI. Both worked flawlessly, so it had to be something else.
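
For completeness, the checks looked roughly like this (alias, credentials and endpoint are placeholders; the mc syntax may differ between versions):

# register the Minio endpoint with the Minio client and upload a test file
mc config host add myminio https://minio.domain.tld "$ACCESS_KEY" "$SECRET_KEY"
mc cp my-file.txt myminio/my-bucket/

# the same upload with the AWS CLI pointed at the Minio endpoint
aws --endpoint-url https://minio.domain.tld s3 cp my-file.txt s3://my-bucket/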

Enable logging

To get some more debugging output, I configured Jenkins to log events for com.amazonaws and org.apache.http.wire. The debugging output does not show up in the build job’s console output but under the configured logger.

Host-style access to S3 buckets

After scanning the debug output, I noticed the following:

http-outgoing-11 >> "PUT /my-file.txt HTTP/1.1[\r][\n]"
Jan 21, 2019 9:36:15 PM FINE org.apache.http.impl.conn.Wire wire
http-outgoing-11 >> "Host: my-bucket.minio.domain.tld[\r][\n]"
Jan 21, 2019 9:36:15 PM FINE org.apache.http.impl.conn.Wire wire
http-outgoing-11 >> "x-amz-content-sha256: UNSIGNED-PAYLOAD[\r][\n]"
...
http-outgoing-11 << "[\r][\n]"
Jan 21, 2019 9:36:15 PM FINE org.apache.http.impl.conn.Wire wire
http-outgoing-11 << "default backend - 404"
Jan 21, 2019 9:36:15 PM FINE com.amazonaws.services.s3.internal.S3ErrorResponseHandler createException
Failed in parsing the error response : default backend - 404

When pipeline-aws-plugin initiates a request to my bucket, it does not request https://minio.domain.tld/my-bucket but https://my-bucket.minio.domain.tld. This is totally fine for AWS S3 buckets, but with the Minio deployment in our Kubernetes cluster this does not work out of the box:

  1. By default, our Minio deployment does not use the --address parameter described in https://github.com/minio/minio/issues/4681
  2. Our Minio ingress also does not listen to 4th-level domains like my-bucket.minio.domain.tld, so the nginx proxy returns the “default backend – 404” string seen in the log output above.

Solving the issue

Instead of configuring host-style access, I fixed it by simply setting pathStyleAccessEnabled: true in my s3Upload step. When enabled, pipeline-aws-plugin does not use the bucket name as a 4th-level subdomain but appends the bucket name to the URL path:

withAWS(endpointUrl: 'https://minio.domain.tld', credentials: config.credentialsId) {
	s3Upload(pathStyleAccessEnabled: true, file: "my-file.txt", bucket: "my-bucket")				
}

Running a Spring Boot JAR service with SELinux enabled

Just a quick reminder of how to run a Spring Boot JAR (or any other self-executing JAR) with SELinux enabled:

chcon --type=java_exec_t /opt/myapp/spring-boot-app.jar

To make this persistent you have to use the bin_exec_t type as java_exec_t is just an alias:

# apply the bin_exec_t
semanage fcontext -a -t bin_exec_t /opt/myapp/spring-boot-app.jar
# restore SELinux contexts
restorecon -R /opt/myapp

ll -Z /opt/myapp
# should look like
# -rwxr-xr-x. 1 myapp myapp unconfined_u:object_r:bin_t:s0 26500195 Aug 28 08:34 myapp.jar

To let systemd start this service, you have to create a systemd unit file at /etc/systemd/system/myapp.service:

[Unit]
Description=My Spring Boot application
After=syslog.target network.target

[Service]
ExecStart=/opt/myapp/spring-boot-app.jar
EnvironmentFile=-/etc/sysconfig/myapp
SuccessExitStatus=143
User=pwss

[Install]
WantedBy=multi-user.target

And don’t forget to add the service user, reload the systemd units and enable myapp.service, as sketched below.
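
The remaining steps look roughly like this (the user and service names are assumptions and have to match your unit file):

# create the service user referenced in the unit file's User= directive
sudo useradd --system --shell /sbin/nologin myapp
# pick up the new unit file
sudo systemctl daemon-reload
# start the service now and on every boot
sudo systemctl enable --now myapp.service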

Using IPv6 with AWS Application Load Balancer (ALB)

Today I struggled for an hour or so to access an AWS-hosted web application over IPv6. Just follow these steps:

  • Get an IPv6 CIDR for your VPC: go to VPC > Your VPCs > ${YOUR_VPC} > Edit CIDRs > Add IPv6 CIDR. The IPv6 CIDR is automatically chosen by AWS; you can’t configure it on your own.
  • For the subnet(s) your ALB is located in, you have to allocate an IPv6 subnet from your previously assigned IPv6 CIDR. Go to VPC > Subnets > ${YOUR_ALB_SUBNETS} > Edit IPv6 CIDRs > Add IPv6 CIDR. You can have 255 IPv6 subnets.
  • You have to add a route for all IPv6 traffic to your route table. In VPC > Route Tables > ${YOUR_ROUTING_TABLE} > Routes > Edit add “Destination=::/0” and “Target=${YOUR_IGW_ID}” as a routing table entry. This was, by the way, the part I had forgotten.
  • Enable dualstack for your ALB. Go to EC2 > Load Balancers > ${YOUR_APPLICATION_LOAD_BALANCER} > Edit IP address type and select dualstack. The option is only available if your subnets have previously been configured with IPv6 CIDRs.
  • Your load balancer’s security group must allow HTTP and/or HTTPS traffic over IPv6. Go to EC2 > Security Groups > ${YOUR_APPLICATION_LOAD_BALANCERS_SECURITY_GROUP} and add the inbound and outbound rules “Protocol=TCP, Port Range=80, Source=::/0” and/or “Protocol=TCP, Port Range=443, Source|Destination=::/0”.

As soon as you have enabled dualstack mode for the ALB, AWS publishes a new AAAA DNS record for the load balancer. This takes a few minutes. You can then access the load balancer over IPv6 by using the load balancer’s existing DNS name (CNAME). The load balancer itself forwards HTTP requests to the backend servers over IPv4, so the EC2 instances do not need a public IPv4 or IPv6 address of their own.
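
For reference, the same steps can also be scripted with the AWS CLI; a rough sketch (all IDs and the ARN are placeholders):

# request an Amazon-provided IPv6 CIDR for the VPC
aws ec2 associate-vpc-cidr-block --vpc-id vpc-0123456789abcdef0 --amazon-provided-ipv6-cidr-block

# assign an IPv6 /64 out of the VPC range to each ALB subnet
aws ec2 associate-subnet-cidr-block --subnet-id subnet-0123456789abcdef0 --ipv6-cidr-block 2600:1f14:aaaa:bb01::/64

# route all IPv6 traffic through the internet gateway
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-ipv6-cidr-block ::/0 --gateway-id igw-0123456789abcdef0

# allow inbound HTTPS over IPv6 in the load balancer's security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --ip-permissions 'IpProtocol=tcp,FromPort=443,ToPort=443,Ipv6Ranges=[{CidrIpv6=::/0}]'

# switch the ALB to dualstack
aws elbv2 set-ip-address-type --ip-address-type dualstack \
  --load-balancer-arn arn:aws:elasticloadbalancing:eu-central-1:123456789012:loadbalancer/app/my-alb/0123456789abcdef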

How to programmatically insert versioned initial data into Spring Boot applications

One of the commonly required tasks for an application using a persistence store is to initialize the underlying database with basic data sets. Most of the time this means something like admin users or default roles.

Setting the stage

To give a proper example, we have the database table role with two columns: id (primary key) as an internal ID and uuid (primary key) as an external key.
In Liquibase, our changeset for this table has the following definition:

	<changeSet author="schakko" id="schema-core">
		<createTable tableName="role">
			<column name="id" type="BIGSERIAL" autoIncrement="true">
				<constraints nullable="false" primaryKey="true" unique="true"
					uniqueConstraintName="unq_role_id" />
			</column>
			<column name="uuid" type="UUID">
				<constraints nullable="false" primaryKey="true" unique="true"
					uniqueConstraintName="unq_role_uuid" />
			</column>
			<column name="name" type="varchar(255)">
				<constraints nullable="false" unique="true" />
			</column>
		</createTable>
	</changeSet>

My requirements are:

  • I want to add multiple custom roles into this table
  • The uuid field must be randomly generated
  • The schema definition must work on H2 and PostgreSQL without the uuid-ossp module. Our application backend is responsible for the generation of UUIDs.

Initializing databases with Spring Boot’s native features

With Java, specifically Spring Boot, there are two ways to initialize the database:

  1. Hibernate, and therefore Spring Boot with JPA, checks for a file named import.sql in the root of the classpath. This file is executed on startup when Hibernate creates the schema.
  2. The file data.sql, or data-${platform}.sql for a concrete DBMS, is used for importing SQL data through the plain JDBC datasource without any JPA involvement.

For simple tasks, both options are feasible. But in our case they can’t fulfil the requirements: a common SQL UUID generator function like generate_uuid() does not exist and probably won’t ever be standardized in SQL. So we would need two separate data.sql files, one for each database management system. And even then, we still wouldn’t have access to the OSSP module for generating a UUID in PostgreSQL.

Inserting data programmatically

Why not use a simple ApplicationListener to generate the roles during the startup of the Spring framework?

@RequiredArgsConstructor
@Component
@Order(Ordered.HIGHEST_PRECEDENCE)
public class InsertRoleStamdata implements ApplicationListener<ApplicationReadyEvent> {
	@NonNull
	private final RoleRepository roleRepository;

	public void onApplicationEvent(ApplicationReadyEvent event) {
		if (roleRepository.count() > 0) {
			return;
		}

		roleRepository.save(new Role("ADMIN", java.util.UUID.randomUUID()));
	}
}

This obviously works and is executed on every application startup. With the if condition we ensure that we only insert a role if no role is present yet.
But what happens if the role ADMIN has to be renamed to ADMINISTRATOR? If you think about it, the code above can rapidly turn into an ugly monster with various condition checks and edge cases. And if you want to refactor it and split the migration into different classes, you have to retain the order of the executed listeners, and so on.
Besides that, we need some traceable versioning.

Using a schema migration tool

For obvious reasons, a schema migration tool like Liquibase or Flyway should be the way to go. But how can it fulfil our requirements?

In Liquibase we can define a changeset which uses the insert tag:

    <changeSet author="schakko" id="role-stamdata">
        <insert tableName="role">
            <column name="uuid" value="${random_uuid_function}"/>
            <column name="name" value="ADMIN"/>
        </insert>
    </changeSet>

This is fine, but as already mentioned:

Neither Flyway nor Liquibase is able to interpolate a variable placeholder (like ${random_uuid_function}) with a function callback defined in Java.

Using a schema migration tool programmatically

Fortunately, Flyway and Liquibase both support programmatically defined changesets: you can write Java code which executes the SQL statements. In Liquibase you have to use the customChange tag. The following snippet shows the required definition in YAML:

databaseChangeLog:
     - changeSet:
         id: create-default-roles
         author: schakko
         changes:
             - customChange:
                 class: de.schakko.sample.changeset.DefaultRoles20171107

The class de.schakko.sample.changeset.DefaultRoles20171107 must implement the interface CustomTaskChange:

package de.schakko.sample.changeset;

import liquibase.change.custom.CustomTaskChange;
import liquibase.database.Database;
import liquibase.database.jvm.JdbcConnection;
import liquibase.exception.CustomChangeException;
import liquibase.exception.SetupException;
import liquibase.exception.ValidationErrors;
import liquibase.resource.ResourceAccessor;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.SingleConnectionDataSource;

public class DefaultRoles20171107 implements CustomTaskChange {

	@Override
	public String getConfirmationMessage() {
		return null;
	}

	@Override
	public void setUp() throws SetupException {
	}

	@Override
	public void setFileOpener(ResourceAccessor resourceAccessor) {
	}

	@Override
	public ValidationErrors validate(Database database) {
		return null;
	}

	@Override
	public void execute(Database database) throws CustomChangeException {
		// reuse the JDBC connection Liquibase has already opened for this migration
		JdbcTemplate jdbcTemplate = new JdbcTemplate(new SingleConnectionDataSource(((JdbcConnection) database.getConnection()).getUnderlyingConnection(), false));
		jdbcTemplate.update("INSERT INTO role (uuid, name) VALUES (?, ?)", new Object[] { java.util.UUID.randomUUID(), "ADMIN" });
	}

}

Liquibase’s Spring Boot auto-configuration runs at an early stage in which Hibernate is not yet loaded. Because of this we can’t inject any Spring Data JPA repositories by default. Even accessing the Spring context is not easy: you would have to expose the application context through a static attribute, and so on.
With Flyway the Spring integration is much better.

Conclusion

This blog post demonstrated how initial data can be inserted into a Spring Boot application’s database. In addition, we discussed how this data can be versioned in a database-independent manner.

Website moved to new Uberspace with HTTPS

After migrating my domain to Route 53 I finally transferred my website to a new Uberspace host which supports Let’s Encrypt. You should be automatically redirected to HTTPS when visiting www.schakko.de.
The whole procedure took 2 hours, including setting up the new Uberspace, importing the existing databases and changing the DNS records. Most of this was straightforward, as the Uberspace team provides really good documentation for it.

BTW: Route 53 sets the TTL for each DNS record to 300 seconds by default. In most cases, 1 day should be sufficient. More DNS queries mean more to pay.
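
Raising the TTL can also be done with the AWS CLI; a hedged sketch (hosted zone ID, record type and value are placeholders):

# set the TTL of a record to one day (86400 seconds) via an UPSERT
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000000EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.schakko.de.",
        "Type": "CNAME",
        "TTL": 86400,
        "ResourceRecords": [{ "Value": "myhost.uberspace.de." }]
      }
    }]
  }'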

Fixing periodically occurring WiFi lags when running Claymore’s Ethereum miner

This is a blog post which literally drove me crazy for a week. After building our mining rig I experienced a bad WiFi connection with high pings, periodically occurring every 30 seconds.
Just scroll down to see my – fairly simple – solution.

Getting into the mining business

A few weeks ago some of my co-workers and I decided to build a simple mining rig to earn some Ethereum tokens. The exchange rate for Ethereum has fallen over the last few days, but it is what it is. Anyhow, we bought 12 Nvidia GTX 1070s, 12 riser cards, 2 mainboards, 4 PSUs with 600 W each and a wattmeter. We assembled everything into an open metal cabinet, put an access point (Linksys with DD-WRT firmware) on top of it and connected the mainboards to the access point.
I have to say that the mining rig itself is located in one of our flats, in my study room. The access point on top of the cabinet acts as a wireless bridge to our other flat. Both mainboards and my workstation are connected to the access point with Ethernet cables. The other flat contains an additional access point with a cable modem and internet connectivity. Nothing fancy.
We switched from ethminer to Claymore’s Ethereum Dual miner due to some problems handling multiple cards and wallets. In the end the rigs worked like a charm.

Experiencing lags in Overwatch

Two days later I wanted to play an Overwatch match on my workstation, also located in my study room. The ping was unstable, and a simple ping command showed that I had random timeouts and that the ping spiked from 20 ms to > 1500 ms for a few seconds, every 30 seconds. This had not happened before the mining rigs were active.

“This must be a software problem of Claymore’s miners”

My first guess was that it had to be a software problem of Claymore’s miner. One of my co-miners had tested a single mainboard with one GPU at his home before, and everything had worked flawlessly. I started to analyze the problem:

  • Killed each Claymore miner process on rig1 and rig2: no lag occurred.
  • Started a single Claymore miner process: a lag of > 600 ms occurred every 30 seconds when the first Ethereum share was received. This indicated a problem in the network implementation of Claymore’s miner or some high bandwidth usage. I checked the bandwidth, but one Claymore miner instance only needs about 12 kBit/s.
  • Started tcpdump on rig1 to identify any conspicuous network activity or packets. Neither the UDP nor the TCP traffic was eye-catching. I could only correlate the receipt of Ethereum shares with latency spikes. The used network bandwidth was still low.

“This must be a network problem with Claymore’s miner”

The last application I had slightly similar problems with was Subversion. 10 years ago SVN sometimes failed to commit data. It turned out that TortoiseSVN struggled with special packets, the MTU size of our company network and the MTU size of our ADSL connection. Because of this, I changed the MTU size of the rig running the single Claymore process. It did not change anything.

Before I tried anything else I disabled the network-related services firewalld and chronyd – without success. stracing the miner did not show anything special either.

“This must be a problem with Ethereum protocol and DD-WRT”

An interesting observation I made was that the pings rig -> ap2 (bridge) -> ap1 (router) -> internet and workstation -> ap2 (bridge) -> ap1 (router) -> internet were both bad, but pinging directly from the main access point ap1 (router) -> internet showed no problem. What the hell?
I suspected that some TCP settings on ap2 (bridge) led to these hiccups. Luckily I could check the network settings and stats of both access points (bridge and router) as they are running DD-WRT. As you can imagine: there were no suspicious network stat (TCP/UDP) changes when a spike occurred.

Could this be a hardware problem?

As I could not see any problem in the software or on the network layer (>= L2), there could only be a generic hardware problem or some L1 error.
During my TCP stats investigation on the access points, I noticed that the WiFi rate of the bridge (ap2) was unstable and fluctuated heavily. This was highly unusual, as it had not happened before the rigs were built.
To exclude any directly network-related problems I did the simplest possible thing: I pulled the Ethernet cables of both rigs (each running one active miner process) so they were no longer connected to the access point. To my surprise I still had network lags. WTF?
After killing both miner processes the network lags went away. So this obviously had to be a problem with the GPU load created by the mining process.

To give you some insight: due to some DD-WRT restrictions the bridge between both access points uses 2.4 GHz and not 5 GHz. Could it be some interference on the wireless layer?
After googling for “gpu” and “spike”, some links caught my eye:

After reading both posts

  • I changed the WiFi channel from 1 to 11
  • I removed the DVI cable from a TFT connected to one rig
  • I removed the USB keyboard connected to one rig

Nothing changed. This was about the point where I wanted to give up. The last thing to test was another power connection. ap2 and all 4 PSUs of the rig were connected to the same connector (psu1,psu2,psu3,psu4)->wattmeter->wall socket. Maybe there were spikes in the voltage when the GPUs were under load, confusing the access point hardware?

Changing the wall socket

I had no free wall socket available behind the cabinet containing both rigs. So I moved the access point from the top of the rig to the floor and a few centimeters towards the other wall. After the access point had power and was connected to ap1 (router) again, the network spikes dropped from 1600 ms to 800 ms. Uhm? I moved ap2 another 20 centimeters away from the cabinet. The spikes went down to 400 ms.

The revelation

At a distance of 1.50 meters between the rig and the access point, no more spikes occurred. I double-checked whether the different wall socket was the solution, but switching from one wall socket to the wattmeter-connected connector made no difference.
So simple: just move the access point away. This whole thing drove me crazy for at least 5 afternoons. I felt so stupid.

The high load on the GPUs when running the Ethereum mining process produces either a signal at 2.4 GHz (which is less likely) or a harmonic around 1.2 GHz (which is more likely). I assume that the spike every 30 seconds occurs when both rigs receive the same mining job at almost the same time and start mining. If anybody has more information, just let me know. I am very interested in the technical explanation for this.