Software engineering, problem solving, nerd stuff

Project idea: Nostradamus AKA prophetr – a social forecasting network for experts

From time to time programming ideas come to my mind which I cannot forget. I have often started a new project, but due to my limited free time it is hard to finish all of them. In the next blog posts I will describe my ideas with some technical and environmental background. If you are interested in getting more information, buying the idea/source code, or simply want to motivate me, drop me a line at me[at]schakko[dot]de. I would really appreciate your feedback.

Making prophecies and verifying their occurrence

It must have been around five or six years ago when I first thought about the idea of making prophecies about technical topics. I remember a talk during lunch with one of my co-workers about some trending IT topic which I had been propagating for a few months. During this talk I said that it would be awesome to have a platform where users can enter their prophecies and other users can verify whether a prophecy has occurred and how exactly it matches the actual outcome.

Based on the number of prophecies made and the number of verified prophecies you could calculate the relevancy of a prophet (= a person who makes prophecies), which indicates the prophet's expert level. A higher number of verified prophecies means that you have more expertise in a given topic than other users.

Possible customers for social forecasting networks

My first intention was to build the social network for selfish reasons. Through mechanisms like not being allowed to change a prophecy after someone has voted on it, you are pinned to one statement which can later be falsified or verified. If you were right, you could always use the classic phrase: “I told you so.”

In larger companies you could identify hidden champions or motivate people to take more interest in their expert knowledge. One day I had a talk with my bank account manager, who was highly interested in the project for obvious reasons: the software would allow them to evaluate the efficiency of share brokers without using real money.

Another possible target group was whistleblowers or persons who want to make sure that a prophecy is published on a specific date. For this I implemented functionality to encrypt the content of a prophecy with symmetric keys. The keys can be stored on remote servers so that only the prophet controls when the prophecy can be published. After Snowden's revelations I instantly thought about this feature again.


I have to admit that the project has one big flaw: self-fulfilling prophecies like “I prophesy that the share price of company VWXYZ will drop in the next few days.” If you are already an expert in your area, there is a high chance that your followers will react to this prophecy and sell their shares. The share price will drop and your prophecy can be verified… You get the idea.

Technical background

At first I started with Spring MVC but after some weeks I switched to PHP/Zend Framework 1.x/MySQL. Most of the statistical computation (relevancy of prophets, influence of prophets and so on) and the social network aspect (who follows whom, which prophecies I can see) is done through database views which made the implementation inside the services really easy.
The encryption part, called remote-credential-loader (RCL), is written in Node.js. RCL polls the deposited decryption key URLs for encrypted prophecies every few minutes. At a given timestamp (e.g. five minutes before the prophecy is released) the URL must provide the AES decryption key, otherwise the prophecy is evaluated as false.

For the frontend I used Twitter Bootstrap 2.

I wrote the whole background documentation (processes, data model, computation) in LaTeX (German only).

Current status of the project

After thinking about the idea for years I finished the beta within the scope of my Bachelor project in 2012. The supervising professor, who belongs to the statistics faculty, was really impressed by it. Since December 2012 I have been the owner of prophetr.de and prophetr.com, which were intended to host the social network, but the project is in a classical 80%/20% state: the application lacks LDAP authorization and synchronization for usage in enterprise environments, the user interface and design are pragmatic and not very user friendly, and so on.

A few months after I finished the Bachelor project I read an article in c’t. If I remember correctly, they were from Austria and had received a lot of money for building a social forecasting network like mine. This was more or less the reason why I abandoned the project for the last two years.

Drop me a line at me[at]schakko[dot]de if you are interested in more information.

ExceptionHandler of @ControllerAdvice is not executed

It happened again: after writing about some issues caused by different JVM class-loader ordering, a similar problem occurred on Friday. One of my colleagues (Dev-A) asked me to look into a problem the team had. For unknown reasons the Spring Boot based application did not return a serialized JSON error object when the validation of a @Valid annotated controller method parameter failed.

@Controller
public class MyController {
	// Validator for MyDto (MyDtoValidator) got called
	@RequestMapping("/validate")
	public @ResponseBody MyData myMethod(@Valid MyDto myDto) {
		return new MyData();
	}
}

An @ControllerAdvice annotated class transformed any validation error into a new exception. This has been done to unify the validation errors when using Spring Data REST and Spring MVC validation.

@ControllerAdvice
public class ValidationErrorHandlerAdvice {

	private MessageSourceAccessor messageSourceAccessor;

	@Autowired
	public ValidationErrorHandlerAdvice(MessageSourceAccessor messageSourceAccessor) {
		Assert.notNull(messageSourceAccessor, "messageSourceAccessor must not be null");

		this.messageSourceAccessor = messageSourceAccessor;
	}

	@ExceptionHandler({ MethodArgumentNotValidException.class })
	@ResponseStatus(HttpStatus.BAD_REQUEST)
	@ResponseBody
	public RepositoryConstraintViolationExceptionMessage handleValidationErrors(Locale locale,
			MethodArgumentNotValidException exception) {
		// this method should be called if the validation of MyController.myMethod had failed
		return produceException(exception.getBindingResult());
	}

	@ExceptionHandler({ BindException.class })
	@ResponseStatus(HttpStatus.BAD_REQUEST)
	@ResponseBody
	public RepositoryConstraintViolationExceptionMessage handleValidationErrors(Locale locale,
			BindException exception) {
		return produceException(exception.getBindingResult());
	}

	private RepositoryConstraintViolationExceptionMessage produceException(BindingResult bindingResult) {
		return new RepositoryConstraintViolationExceptionMessage(
				new RepositoryConstraintViolationException(bindingResult), messageSourceAccessor);
	}
}

All in all, the controller advice itself looked fine to me, especially as the code is easy to understand and has been used in other projects too without any problems.

Empty HTTP response body

Nevertheless the behavior was mysterious:

  • When calling /validate in the browser, the custom validator for MyDto and therefore the controller method definitely got hit. Nevertheless none of the exception handlers in the ValidationErrorHandlerAdvice got called. To make it even more mysterious, the HTTP response Spring generated consisted only of the HTTP status code 400 (Bad Request) without a single character in the HTTP response body. The response body was completely empty.
  • Another developer (Dev-B) uses Linux as his operating system. On his machine the code above worked without any problems and returned the expected HTTP status code 400 with the serialized JSON validation error object.

Dev-A works on a Windows machine. When he called the “/validate” endpoint on Dev-B's host, the response body contained the serialized validation error. Conversely, when Dev-B (Linux) called “/validate” on Dev-A's machine (Windows), the response body was empty.
I checked the HTTP request headers of both browsers, but they were more or less the same and did not have any influence on the HTTP pre-filters Spring had registered. Both environments used the Oracle JDK, but with different update releases (u43 vs. u63). Patching both JDKs to the same level was something I wanted to try last, as it seemed unlikely to be the reason.

Debugging session

I started to debug through the Spring Framework and realized that the order in which the registered exception handlers were checked for their responsibility for the current exception was completely different. On Dev-B's machine the ValidationErrorHandlerAdvice was the first in the list; on Dev-A's machine the first responsible exception handler was located in ResponseEntityExceptionHandler.
After stepping further through ResponseEntityExceptionHandler it made absolute sense that the response body was empty on Dev-A's machine. But it did not make any sense that ResponseEntityExceptionHandler had been loaded in the first place.

After searching for more @ControllerAdvice annotated classes in the project I found this piece of code:

@ControllerAdvice
public class CustomErrorController extends ResponseEntityExceptionHandler {

	private static final Logger LOG = LoggerFactory.getLogger(CustomErrorController.class);

	// handles the exception types declared in the method signature, i.e. every Exception
	@ExceptionHandler()
	public ModelAndView notFound(HttpServletRequest req, Exception exception) {
		LOG.info(exception.getMessage());
		ModelAndView mav = new ModelAndView();
		// ... not so important ...
		return mav;
	}
}

Okay, so at least the exception handlers of ResponseEntityExceptionHandler had been introduced by our own code and not by some Spring magic.

Fixing the problem

While debugging the initialization phase of Spring I saw that the order of the detected controller advices differed between both systems: CustomErrorController got registered before ValidationErrorHandlerAdvice on Dev-A's machine and vice versa on Dev-B's. As the wrong behavior only occurred on Windows machines, I assume that the underlying component scan is responsible for the different order.

In the end the fix was easy: I annotated both controller advices with @Order and gave ValidationErrorHandlerAdvice a higher precedence than CustomErrorController.
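
The change looked roughly like this; the concrete order values are just an example, what matters is that ValidationErrorHandlerAdvice receives the lower value and is therefore asked first:

import org.springframework.core.Ordered;
import org.springframework.core.annotation.Order;

@Order(Ordered.HIGHEST_PRECEDENCE) // asked first
@ControllerAdvice
public class ValidationErrorHandlerAdvice {
	// ... exception handlers from above ...
}

@Order(Ordered.HIGHEST_PRECEDENCE + 1) // lower precedence, asked afterwards
@ControllerAdvice
public class CustomErrorController extends ResponseEntityExceptionHandler {
	// ...
}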

How to fix NoSuchMethodError or NoSuchMethodException

Yesterday my team had the situation that a deployment failed with a NoSuchMethodError, specifically the method com/google/common/collect/ImmutableList.copyOf could not be found while querying the Confluence REST API.

NoSuchMethodError and NoSuchMethodException occur for an obvious reason: a method is called at runtime but the providing class does not contain that method.

NoSuchMethodException is thrown when a method is looked up at runtime through the Java Reflection API. NoSuchMethodError is thrown when compiled Java code calls the method directly without using the Reflection API.
Because of its nature, the reason for a NoSuchMethodException can be a simple syntactical issue (e.g. a misspelled method name passed to getDeclaredMethod). If you receive the exception during development, first check the spelling of the method name you are trying to call through reflection.
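
A minimal example of the exception case (the method name is made up); the error case cannot be reproduced within a single compilation unit because it requires a mismatch between the compile-time and the runtime classpath:

public class NoSuchMethodDemo {
	public static void main(String[] args) {
		try {
			// String has no method named "fooBar", so the reflective lookup fails
			String.class.getDeclaredMethod("fooBar");
		} catch (NoSuchMethodException e) {
			// thrown by the Reflection API at lookup time
			System.out.println("not found: " + e.getMessage());
		}
		// A NoSuchMethodError, in contrast, is thrown when code that was compiled against
		// a class version containing the method is executed against a classpath where the
		// loaded class version does not contain it (e.g. google-collections vs. Guava, see below).
	}
}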

There are essentially two reasons why this error occurs at runtime:

  • The method signature (method name and expected parameters) does not exist anywhere in your classpath. There could be an issue in your deployment/packaging phase. For a simple web project packaged through Maven this is very unlikely, but if you use overlays with classes outside of your POM definition, that is where your problem could be located.
  • The method signature exists multiple times in your classpath, i.e. you have different versions of the class in your classpath. The classes can contain methods with the same names but differing parameter lists.
    Which of the classes inside the JAR files takes precedence highly depends upon the environment. The JVM specification does not require a classloader to fetch JARs in alphabetical or last-modified order, nor to use first-come/first-serve or last-come/first-serve semantics. For example, Tomcat <= 7 loads JAR files in alphabetical order, while Tomcat 8 lets the filesystem decide which JAR comes first (Order of loading jar files from lib directory).

To identify the source of the problem, navigate to the main classpath directory of your application (e.g. WEB-INF/lib) and execute

for jar in *.jar; do for class in $(jar -tf $jar | grep $CLAZZ.class | sed 's/.class//g'); do javap -classpath $jar -s $class | grep -A 2 $METHOD && echo $jar.$class; done; done

Replace $CLAZZ with the name of the class and $METHOD with the name of the method. The shell script above searches for every occurrence of the method inside any of the JARs and prints out the different signatures.
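
For the concrete case described below the invocation would look like this (run from inside WEB-INF/lib):

CLAZZ=ImmutableList
METHOD=copyOf
for jar in *.jar; do for class in $(jar -tf $jar | grep $CLAZZ.class | sed 's/.class//g'); do javap -classpath $jar -s $class | grep -A 2 $METHOD && echo $jar.$class; done; done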

  • If there is no result, you hit the first case: your deployment script did not include the required dependency.
  • If there are multiple results from different JAR files, you have to compare the stacktrace of your application logs with the output of the script. Check the dependency hierarchy of your Maven POM and exclude the version not containing the expected method signature.

In our case, I had mistakenly included both google-collections-1.0 and guava-11.0.2 in a referenced JAR; both provide ImmutableList. google-collections is the older dependency and does not contain the copyOf method. In the development environment the (Spring Boot) application had always been executed through the embedded application server; in production the WAR was deployed inside a Tomcat 8 container. In the end we removed google-collections from the referenced JAR and the issue was fixed.

One last word from the Tomcat Bugzilla by Mark Thomas:

Applications that depend on JARs being searched for classes in a particular order are broken and should be fixed.

Collecting and visualizing metrics with statsd, InfluxDB and Grafana on Fedora 22

My employer NeosIT offers ZABOS, a web-based SMS notification solution for organizations with security roles. In the last months we extended the ZABOS application to support digital alerting through POCSAG. After some problems with a third-party component we implemented the ability to collect all POCSAG telegrams delivered in the surrounding area and to notify the authorized recipients by SMS. Most of the incoming telegrams are discarded because they are not assigned in our database. Nevertheless I was interested in a graphical representation of all incoming POCSAG messages, and additionally in a comparison with alerts sent via ZVEI, an analogue notification protocol. The ZABOS application log file contains all the relevant information, which I wanted to extract.

Setting the stage

Our current infrastructure is based upon Fedora systems and some CentOS boxes. A central Logstash server collects incoming log messages through the Lumberjack input filter. After reviewing possible alternatives I decided on statsd, InfluxDB and Grafana.

InfluxDB

InfluxDB is an open-source distributed time-series database which stores points in time and assigns key/value pairs to them. Installing it on Fedora 22 is easy: get the latest RPM, install it and open the TCP ports:

wget https://s3.amazonaws.com/influxdb/influxdb-0.9.4.2-1.x86_64.rpm
sudo dnf install ./influxdb-0.9.4.2-1.x86_64.rpm

# open network ports
# 8083: admin GUI port
sudo firewall-cmd --add-port=8083/tcp --permanent
# 8086: REST API
sudo firewall-cmd --add-port=8086/tcp --permanent
sudo firewall-cmd --reload

systemctl start influxdb
journalctl -f -u influxdb

After installing the RPM, navigate to http://localhost:8083 and set up a new database. The screenshots in the official documentation are slightly outdated, so use the query input:

CREATE DATABASE "demo"
CREATE USER "demo" WITH PASSWORD 'demo'

Also make sure that you can open the URL http://localhost:8086/ping. It should return a valid HTTP 204 response.
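
A quick check from the command line (assuming the default port) should print a 204 status and no body:

curl -i http://localhost:8086/ping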

statsd

statsd is a Node.js service which collects time series data delivered through UDP or TCP. Most producers, e.g. Logstash, provide a statsd output. statsd itself is pluggable and has a backend plug-in for InfluxDB, so every incoming time series is forwarded to the InfluxDB instance.

# get required packages
sudo dnf install nodejs npm git
cd /opt
sudo git clone https://github.com/etsy/statsd.git
cd statsd

# download InfluxDB backend
npm install statsd-influxdb-backend -d

# open network ports
firewall-cmd --add-port=8125/tcp --permanent
firewall-cmd --add-port=8125/udp --permanent
firewall-cmd --reload

# make configuration directory and copy example configuration
mkdir /etc/statsd/
cp exampleConfig.js /etc/statsd/config.js

# create a user
adduser statsd
# add systemd unit
vi /etc/systemd/system/statsd.service

The statsd.service file contains the unit definition for systemd. I mostly used the sample given at digitalocean.com:

[Service]
ExecStart=/usr/bin/node /opt/statsd/stats.js /etc/statsd/config.js
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=statsd
User=statsd
Group=statsd
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target

After saving the unit definition, edit the /etc/statsd/config.js:

{
  influxdb: {
    version: 0.9, // !!! we installed 0.9
    host: '127.0.0.1', // InfluxDB host. (default 127.0.0.1)
    port: 8086, // InfluxDB port. (default 8086)
    database: 'demo', // InfluxDB database instance. (required)
    username: 'demo', // InfluxDB database username. (required)
    password: 'demo', // InfluxDB database password. (required)
    flush: {
      enable: true // Enable regular flush strategy. (default true)
    },
    proxy: {
      enable: false, // Enable the proxy strategy. (default false)
      suffix: 'raw', // Metric name suffix. (default 'raw')
      flushInterval: 1000 // Flush interval for the internal buffer. (default 1000)
    }
  },
  port: 8125, // StatsD port.
  backends: ['./backends/console', 'statsd-influxdb-backend'],
  debug: true,
  legacyNamespace: false
}

If you omit the version property, statsd-influxdb-backend uses the old protocol version. InfluxDB 0.9 is incompatible with prior versions, so you would receive HTTP 404 errors when statsd forwards its metrics to InfluxDB.

# enable service
systemctl enable statsd
systemctl start statsd

journalctl -f -u statsd
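
To verify the chain, you can push a test counter into statsd by hand (the metric name is arbitrary); with debug: true it shows up in the console backend output and is flushed to InfluxDB shortly afterwards:

echo "demo.test:1|c" | nc -u -w1 127.0.0.1 8125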

Logstash

In our special case I had to use the logstash-forwarder to forward the ZABOS application log to Lumberjack. To be compatible with our existing Logstash infrastructure, I configured a special input filter to extract POCSAG RICs and ZVEI series from the ZABOS log file. The filter itself is out of scope of this blog entry.

The statsd output plugin for Logstash provides the ability to send extracted log data to statsd. The configuration is straightforward:

filter {
 # log extraction logic (e.g. grok patterns) skipped
}

output {
 if [pocsag_subric] {
  statsd {
   host => "127.0.0.1"
   port => 8125
   increment => "pocsag.incoming.%{pocsag_subric}"
  }
 }
}

This conditional output increments a statsd key containing the given POCSAG SubRIC whenever the pocsag_subric field is present.

After manually running the Logstash agent with the configuration above, Logstash sends all found POCSAG SubRICs to the local statsd instance, which in turn forwards them to InfluxDB.
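
To check whether the counters actually arrive in InfluxDB, you can run a query in the admin GUI; the exact measurement names depend on the naming scheme of the statsd InfluxDB backend:

SHOW MEASUREMENTS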

One note about logstash-output-influxdb: it supports a direct output into InfluxDB without using statsd, but it only supports the old API prior to 0.9. In addition, most time series producers send their data in the statsd format. So the setup I described is more complex, but you gain flexibility.

Grafana

At this point I was able to forward all POCSAG telegrams to InfluxDB. To visualize the collected information, I installed Grafana. The Grafana client connects to different backend databases like OpenTSDB, ElasticSearch and InfluxDB to produce time series based graphs. Installing Grafana can be accomplished with yum/dnf:

sudo dnf install https://grafanarel.s3.amazonaws.com/builds/grafana-2.1.3-1.x86_64.rpm

# open ports
firewall-cmd --add-port=3000/tcp --permanent
firewall-cmd --reload

systemctl enable grafana-server
systemctl start grafana-server

After navigating to http://localhost:3000 you need to set up a new data source: click on Data Sources > Add New and enter the credentials for your InfluxDB instance.

Important: you have to enter an FQDN as the database URL, not http://localhost! Your browser connects directly to the InfluxDB backend, so it must be able to reach the InfluxDB REST endpoint.

If you need professional consulting or development services for the topics above, just look on our website or leave us a message at info[at]neos-it[dot]de.

Slow RAID performance with our new Linux storage

During the last months we periodically experienced performance problems with our storage system. Investigating the cause of the slow performance was difficult as we did not have direct shell access and could only rely on the limited information from the web GUI. Yesterday my colleagues migrated the storage system from the proprietary operating system to Fedora 22.

After some problems with LVM and directory permissions for Samba, the storage went back online this morning. We quickly noticed that our steadily slow storage had transformed into a “sometimes fast, sometimes really slow” machine. For one thing, copying ISOs from and to a Samba share resulted in really bad I/O performance on every VM using mounted iSCSI disks. For example, during a copy through SMB our internal JIRA and Confluence were no longer usable as the proxy timed out. Both VMs (JIRA/Confluence and the proxy) were stored on the iSCSI disks provided by the storage.

We excluded the Samba daemon and the operating system as root causes for this issue. We tested the performance with the help of dd and compared the results with Thomas Krenn's reference values. Our eyes popped when we saw that the performance of our RAID was an order of magnitude (s)lower than the reference values. Even a software RAID was four times (!) faster than our hardware RAID. For direct reads/writes we got a constant, slow throughput of 40 MByte/s. WTF? We thought about this issue and came to the conclusion that it had to be something with the LSI 9261-8i RAID controller of the storage. A defect in the controller itself seemed unlikely. But then we realized that the Backup Battery Unit (BBU) of the RAID controller was defective. Could a faulty battery really have such an impact? Indeed, Thomas Krenn supported this thesis: a defective or disabled BBU causes the RAID write cache to be disabled, and performance drops with it.
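
For reference, the kind of dd test we ran looks roughly like this (the paths are examples; the direct I/O flags bypass the page cache so the raw array throughput is measured):

# sequential write test with direct I/O
dd if=/dev/zero of=/mnt/storage/dd-test.img bs=1M count=1024 oflag=direct
# sequential read test with direct I/O
dd if=/mnt/storage/dd-test.img of=/dev/null bs=1M iflag=direct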

Our BBU replacement is ordered and I am optimistic that we will fix the performance issue. I’ll update this blog post as soon as we have the new battery installed.

Update 2015-09-14: BBU has been installed. The RAID performance is fine now.

How I searched for an ENT doctor in Wolfsburg and found IT problems

In this blog post I want to describe the insights I gathered while dealing with my tinnitus.

The backstory

On the Tuesday exactly two weeks ago I woke up early in the morning to a high-pitched whine in my ear. Since I had never had any problems with my ears before, I waited out the Tuesday and hoped for improvement. Unfortunately it did not get better on Wednesday either. I have to add that even quiet noises bother me when I am falling asleep; I perceive the whine of discharging capacitors, for example, as extremely annoying. The tone in my ear, or rather my head, had roughly the same frequency. So before I would go crazy, I decided to see a doctor.

The search for an ear, nose and throat doctor

During the summer holidays in general, and the Volkswagen factory holidays in particular, not much is going on here in Wolfsburg. That also applies to the staffing of doctors' offices. I spent half of Wednesday trying to reach a doctor's office. One highlight: at four offices the answering machine referred me to the next stand-in practice, and the last stand-in in that chain was unreachable. In some cases the phone numbers of the practices listed on Google were outdated or the opening hours were wrong. This is where I got the first idea: develop a simple website on which doctors can enter their stand-in.

In the end I did reach a doctor's office by phone, where I was told that they were not accepting new patients and that I should go to the emergency room instead. So the second requirement for the website was a simple checkbox: “accepts new patients”.

Mix-up in the emergency room

On Thursday morning my wife drove me to the emergency room. I got a wristband with a barcode around my wrist and was sent to the ENT department. On the way there I thought it would actually be quite cool if the wristband could show you your way through the hospital, be it via RFID instead of the barcode or a RasPi with a barcode reader and a mini LCD in the hallways.

When I arrived at the ENT department, I was immediately confused. There was no doctor and no room anywhere where I could have checked in. In the waiting room I was told that a doctor would come by at some point, which fortunately turned out to be true. Some kind of automated check-in would have been nice…

I was called into the treatment room surprisingly quickly; up to that point I had only been in the hospital for about 45 minutes. In the treatment room the doctor asked me whether the ear pain was very severe. I was confused again: the noise was annoying, but not exactly painful. We both realized that there was another patient with my last name. Lucky for me: I was allowed to stay in the treatment room and thus saved a lot of time. On the other hand, it shocked me: what would have happened if she had given me pills that did not match my symptoms at all, without asking any further questions? With a number drawn in the waiting room and a check against my wristband, none of this would have happened in the first place.

Hearing test

With my medical record and my wristband I was sent to the hearing test. While the doctor in charge waited for her slow PC software and I watched the battery indicator on the LCD of the hearing test device blink ominously, the loud PC fan irritated me. The whole room was so well insulated that you could hear absolutely nothing from outside, but the PC hummed instead. Annoying. The hearing test was just about to start when the battery of the hearing test device suddenly died. If the thing had beeped, someone would probably have noticed earlier.

The hearing test went on without any notable interruptions. I pressed the button whenever the humming of the PC was not distracting me. All in all I apparently had a lucky hand: I was certified to have the hearing of a young god. A young god with tinnitus above 8 kHz.

Prescriptions

With my (printed) test result in hand I went back down to the ENT department, waited briefly and was called in. The attending doctor prescribed me pills. I was handed a slip of paper with the prescription and was supposed to hand it in at the emergency room. I also learned that the ENT practice I had called last now had to accept me as a patient.

I ended the day at the hospital by having the slip of paper exchanged for a real prescription at the emergency room, and I was discharged.

Scheduling an appointment with the ENT doctor

Right after the hospital visit I called the ENT practice. Unfortunately, only an answering machine without any announcement; apparently the working day was already over. I called again early the next day, got annoyed by extremely bad hold music, was accepted as a patient and immediately got an appointment for the following Monday.

At this point I asked myself why so few doctors' offices allow booking appointments over the internet. The services do exist. For me, as a person who really dislikes making phone calls, that would be a blessing. The medical assistants would presumably also be much less stressed, since many of the phone calls probably (?) revolve around scheduling appointments.

The appointment

On Monday afternoon I went to the doctor's office. Because of the poor signage, another patient and I walked all the way up to the 4th floor, only to find out that the practice was on the ground floor (memo for the website: floor and accessibility must be enterable).

After a very short waiting time of 10 minutes it was my turn. I got to do another hearing test, which I once again mastered with a good button-pressing score. The doctor signed me off sick for the rest of the week.

One week later

The “experiences” during my tinnitus still occupy me. As a software developer I am used to identifying problems and optimizing processes. But since doctors do not have to worry about a shortage of patients, it will probably be hard to optimize anything there. The pressure and the annoyance lie rather on the side of the customer (the patient).

At this point I also have to say that I felt well and kindly treated by the two doctors at the hospital. They had to run both departments alone (!) because the rest of their colleagues were ill. The ENT doctor and his staff were also friendly and likeable. In that spirit: +1 for the doctors, -1 for the processes and -1 for the healthcare system.

Integration testing the mail dispatching in Laravel 5.1

When using the Mail facade in Laravel it is not that easy to test the output of the parsed mail template. Like the author of http://stackoverflow.com/questions/31120567/unittesting-laravel-5-mail-using-mock I received the error Method Mockery_0__vendor_Swift_Mailer::getTransport() does not exist on this mock object. I ended up listening to the mailer.sending event:

    public function testRegistrationMailIsSend_afterSubmittingForm()
    {
        // flag for closure has been called
        $mailerAssertionHasBeenCalled = false;

        // receive every Event::fire method and pass the reference from the outer scope into the closure
        Event::shouldReceive('fire')->andReturnUsing(function($event, $params) use (&$mailerAssertionHasBeenCalled) {
            // filter only the mailer.sending event
            if ($event != 'mailer.sending') {
                return;
            }

            // reference will be modified
            $mailerAssertionHasBeenCalled = true;
            // Swift_Message; Illuminate\Mail\Mailer::sendSwiftMessage
            $msg = $params[0];

            $this->assertEquals('Verify your e-mail account', $msg->getSubject());
            $recipients = $msg->getTo();

            $this->assertTrue(array_key_exists('my@domain.com', $recipients));
            $verificationKey = Registration::first()->verification_key;

            // assert registration key is present in parsed template
            $this->assertContains('/registration/verify-email?key=' . $verificationKey, $msg->getBody());
        });

        // visit our registration controller
        $this->visit('/registration')
            ->submitForm('Register', ['email' => 'my@domain.com'])
            ->see('Mail has been sent');

        // make sure that our closure has been called
        $this->assertTrue($mailerAssertionHasBeenCalled);
    }

Seeding the database for integration tests in Laravel

In my last post I wrote about how to define the test environment for database integration tests. Now I want to describe how the database can be populated with test or master data.

First of all, every test inheriting from the generated TestCase class executes the Artisan migrate command (TestCase::prepareForTests()). The trait Illuminate\Foundation\Testing\DatabaseMigrations is only required if all tables of the schema have to be dropped after the test execution. This can be necessary for system tests in which the whole schema is populated with test data. For “simple” integration tests, using transactions should be sufficient.
One word about the traits DatabaseMigrations and DatabaseTransactions: both are executed by PHPUnit before every test method. PHPUnit scans all methods in the test class for the documentation annotations @before, @after, @beforeClass and @afterClass (PHPUnit_Framework_TestCase::runBare() and PHPUnit_Util_Test::getHookMethods()). Both traits use the @before annotation to set up the context:

trait DatabaseMigrations
{
    /**
     * ckl: this annotation advises PHPUnit to run the trait before every test case
     * @before
     */
    public function runDatabaseMigrations()
    {
        $this->artisan('migrate');

        $this->beforeApplicationDestroyed(function () {
            $this->artisan('migrate:rollback');
        });
    }
}

With this in mind, the seeding of the database can be placed in two different locations. First of all, the non-working approaches: putting the seeding inside the setUp() or prepareForTests() method does not work because there is no active transaction at that point:

    public function setUp() {
        parent::setUp();
        // ckl: wrong place; this method is called on startup. The seeding is outside an active transaction
        // $this->seed('QualificationsTableSeeder');
    }

    public function prepareForTests() {
        $sut = $this->app->make('\App\Services\ExperienceService');
        // ckl: wrong place; this method is called on startup. The seeding is outside an active transaction
        // $this->seed('QualificationsTableSeeder');
    }

Using a plain seeder method with @before does not work either. ReflectionClass::getMethods(), which is used by PHPUnit, first returns all “native” / non-trait methods and only after that the methods from traits:

    /**
     * ckl: local methods have higher precedence than traits, so this method is called *before* the DatabaseTransactions trait has been called
     * @before
     */
    public function seedTables() {
        $this->seed('MyTableSeeder');
        $testInstance = factory('App\User')->create();
    }

By starting a transaction inside the seedTables(), we have the seeding inside a running transaction:

    /**
     * @before
     */
    public function seedTables() {
        // start the transaction here and let DatabaseTransactions only roll it back. This only works for the first test case
        DB::beginTransaction();
        $this->seed('MyTableSeeder');
    }

The rollback is done by the DatabaseTransactions trait. Having two DB::beginTransaction() calls and only one rollback is not a problem per se: MySQL does not support nested transactions, so Laravel only starts a real transaction on the first DB::beginTransaction() call. Every further invocation only increments a counter.
However, Laravel only executes the actual rollback once all DB::rollBack() calls have been made, so seedTables() has to look like this:

    /**
     * @before
     */
    public function seedTables()
    {
        DB::beginTransaction();

        $this->beforeApplicationDestroyed(function () {
            $this->app->make('db')->rollBack();
        });

        $this->seed('MyTableSeeder');
    }

A much cleaner solution is to call the seedTables() method inside every test case:

    public function seedTables() {
        $this->seed('MyTableSeeder');
    }

    public function testSeeding() {
        // ckl: transaction has been started
        $this->seedTables();

        $this->assertTrue(true);
    }

Test environments for database integration tests in Laravel 5

As far as I have read, in Laravel 4 you could define your database integration test environment by adding a testing/database.php or .env.testing.php file containing your configuration. In Laravel 5 both ways no longer work. To switch your environment you have two options:

  1. Put both configuration definitions (testing, dev/production) inside your config/database.php:
        'connections' => [
    
            'sqlite' => [
                'driver' => 'sqlite',
                'database' => storage_path('database.sqlite'),
                'prefix' => '',
            ],
            'mysql' => [
                'driver' => 'mysql',
                'host' => env('DB_HOST', 'localhost'),
                'database' => env('DB_DATABASE', 'schema'),
                'username' => env('DB_USERNAME', 'root'),
                'password' => env('DB_PASSWORD', 'root'),
                'charset' => 'utf8',
                'collation' => 'utf8_unicode_ci',
                'prefix' => '',
                'strict' => false,
            ],
            'mysql_testing' => [
                'driver' => 'mysql',
                'host' => env('DB_HOST_TEST', 'localhost'),
                'database' => env('DB_DATABASE_TEST', 'schema_test'),
                'username' => env('DB_USERNAME_TEST', 'root'),
                'password' => env('DB_PASSWORD_TEST', 'root'),
                'charset' => 'utf8',
                'collation' => 'utf8_unicode_ci',
                'prefix' => '',
                'strict' => false,
            ],
    

    and store the configuration defaults in your .env file by adding the configuration keys DB_HOST_TEST, DB_DATABASE_TEST and so on. Then you must modify your base TestCase::createApplication() to use the mysql_testing connection:

        /**
         * Creates the application.
         *
         * @return \Illuminate\Foundation\Application
         */
        public function createApplication()
        {
            // putenv expects a single "KEY=value" string
            putenv('DB_CONNECTION=mysql_testing');

            $app = require __DIR__.'/../bootstrap/app.php';

            $app->make(Illuminate\Contracts\Console\Kernel::class)->bootstrap();
    
            return $app;
        }
    
  2. I prefer the second solution: Copy the .env file to .env.testing, modify the settings and override the default Dotenv file .env by modifying your base TestCase::createApplication:
        public function createApplication()
        {
            $app = require __DIR__.'/../bootstrap/app.php';
            // ckl: use .env.testing in favor of .env; clear separation between configuration values and configuration definition
            $app->loadEnvironmentFrom('.env.testing');
    
            $app->make(Illuminate\Contracts\Console\Kernel::class)->bootstrap();
    
            return $app;
        }