Results for category "Datenbanken"

20 Articles

Fixing “Unable to obtain lock on store lock file” when using the embedded database of Neo4j

After some years without using Neo4j I had the chance to use the graph database again in my current project. Neo4j was a good fit because it makes it really easy to prototype an idea, and the project itself relies heavily upon tree structures. Modelling tree structures in relational database management systems is always a PITA and takes quite some time to implement in the backend. The good news is that trees are just directed graphs, which can be modelled easily with Neo4j.
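
As an illustration, a tree node can be mapped roughly like this with the Neo4j OGM annotations that ship with spring-data-neo4j (a minimal sketch; the TreeNode entity and the CHILD_OF relationship type are made up for this example):

import java.util.HashSet;
import java.util.Set;

import org.neo4j.ogm.annotation.GraphId;
import org.neo4j.ogm.annotation.NodeEntity;
import org.neo4j.ogm.annotation.Relationship;

@NodeEntity
public class TreeNode {
	@GraphId
	private Long id;

	private String name;

	// each child is connected to its parent by a single directed CHILD_OF relationship,
	// so the whole tree is nothing more than a directed graph
	@Relationship(type = "CHILD_OF", direction = Relationship.INCOMING)
	private Set<TreeNode> children = new HashSet<>();
}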

As a Spring guy I used the latest spring-data-neo4j 4.1.x release to connect the application to the graph database. For easier prototyping I used the embedded driver.

import org.neo4j.ogm.session.SessionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.neo4j.repository.config.EnableNeo4jRepositories;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@EnableTransactionManagement
@EnableNeo4jRepositories(basePackages = "de.schakko.neo4j")
@Configuration
public class Neo4jConfiguration extends org.springframework.data.neo4j.config.Neo4jConfiguration {
	@Override
	public SessionFactory getSessionFactory() {
		return new SessionFactory(getConfiguration(), "de.schakko.neo4j");
	}
	
	@Bean
	public org.neo4j.ogm.config.Configuration getConfiguration() {
		org.neo4j.ogm.config.Configuration config = new org.neo4j.ogm.config.Configuration();
		
		config.driverConfiguration()
		.setDriverClassName("org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver")
		.setURI("file:///var/tmp/graph.db");
		
		return config;
	}
}

When running the integration tests (or after automatic reloading of the application by spring-boot-devtools) the application failed to reconnect to the database:

org.neo4j.kernel.StoreLockException: Unable to obtain lock on store lock file: file:///var/tmp/graph.db

In the end I fixed the error by using

import org.junit.After;
import org.junit.runner.RunWith;
import org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver;
import org.neo4j.ogm.service.Components;

public class ItemRepositoryIT {
	@After
	public void tearDown() {
		// close the embedded driver after each test so the lock on the store is released
		((EmbeddedDriver) Components.driver()).close();
	}
// ...
}

in the integration tests.

MSSQL: Database * already exists. Choose a different database name. Cannot attach the file * as database.

I am currently playing around with ASP.NET and its Entity Framework. At some point I wanted to execute all my migrations against a new local SQL Express database. After I had dropped the database in SQL Management Studio, the Update-Database command of the Entity Framework failed with the error “Database ‘$path.mdf’ already exists. Choose a different database name. Cannot attach the file ‘$path.mdf’ as database” (German translation: “Die ‘$path.mdf’-Datenbank ist bereits vorhanden. Wählen Sie einen anderen Datenbanknamen aus. Die Datei ‘$path.mdf’ kann nicht als ‘$path’-Datenbank angefügt werden.”).

SQL Server Management Studio no longer showed the database as present. A manually executed DROP DATABASE statement only confirmed that there was no such database. After checking some other possible error sources (machine.config, web.config and so on), I ended up downloading sseutil from http://www.microsoft.com/download/en/details.aspx?DisplayLang=en&id=3990 and dropping the database by hand:

> sseutil -s .\SQLEXPRESS -l # shows all databases in the local SQL Express instance
1. master
2. tempdb
3. model
4. msdb
5. $failed_database

> sseutil -s .\SQLEXPRESS -d name=$failed_database
Failed to detach '$failed_database'

Although I received the detach error, the database was no longer registered and I was able to execute the Update-Database command without any problems.

Hibernate uses wrong schema during schema validation

Recently I stumbled upon the same problem this guy described. Our Oracle database instance contains multiple schemas with almost the same structure. Every developer has his own schema for unit and integration tests. On application startup the Hibernate schema validator calls DatabaseMetaData.getTables() for every mapped entity. The method returns the first matching table it finds in any schema, and the returned tables are by default ordered by schema name. Side note: I would expect the home schema of the current user to be preferred. This leads to the situation that the validation sometimes fails: a user has already migrated his own schema (schema name app_user_unittest) while the schema for the build server (schema name app_build_unittest) still has the old schema version.

Overriding the DatabaseMetaData.getTables() method is not possible as it is implemented in the Oracle JDBC driver. Instead, you can use the Hibernate property hibernate.default_schema, which points to the preferred schema. Depending on your development environment, the property can be set during application startup by the application itself or through a system property on your application server.
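
For a plain Hibernate bootstrap this could look roughly like the following (a minimal sketch; HibernateSchemaConfig is a made-up helper, and with Spring or an application server you would set the same property through your usual configuration mechanism instead):

import org.hibernate.cfg.Configuration;

public class HibernateSchemaConfig {
	public static Configuration forSchema(String schema) {
		Configuration configuration = new Configuration();
		// pins schema validation (and generated SQL) to exactly this schema instead of
		// the first match returned by DatabaseMetaData.getTables()
		configuration.setProperty("hibernate.default_schema", schema);
		return configuration;
	}
}

A developer could then feed in his personal schema, e.g. forSchema(System.getProperty("test.schema", "app_user_unittest")).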

WSUS: Moving from Windows Internal Database to external SQL Server 2008 and receiving “Token-based server access validation failed with an infrastructure error”

Today I had to move the WSUS internal database to one of our backend database servers. Microsoft provides good instructions on how to do this; nevertheless I ran into a problem.

Microsoft SQL Server 2008 did not allow me to add the machine account of our WSUS frontend server (let me call it WSUS-SRV), so I created a new Active Directory security group called WSUS Administrators containing the WSUS-SRV machine account. I then gave this security group permission to access the database.

After starting the IIS Admin Service and Update Services, the database backend server showed the error Token-based server access validation failed with an infrastructure error (event id 18456). Oops.
One workaround would have been to disable UAC (http://blogs.msdn.com/b/sqlserverfaq/archive/2010/10/27/troubleshooting-specific-login-failed-error-messages.aspx). Not a solution I was very keen on.

I fixed the problem by creating a local security group on the database server and adding the machine account of WSUS-SRV to it.

Databases in the development environment – versioning and migrations

One of the most annoying questions I come across in software development, besides How can I format this date?, is How can I put my database under version control?

If you have not already read K. Scott Allen’s database series, do it now.
First, follow his three rules:

  1. Never use a shared database server for development work.
  2. Always Have a Single, Authoritative Source For Your Schema
  3. Always Version Your Database

Requirements

Ask your customer which database and which environment are used. Most of these questions should be answered by a well-prepared requirements questionnaire.
Ask:

  • Which operating system is used
  • How is the operating system localized
  • Which database management system is installed
  • Which version and service packs of the DBMS are used
  • Which language version of DBMS is used

It might sound irrelevant, but database management systems differ from version to version. Set up your development database server with exactly the same settings your customer uses.

Database environment

The ideal situation is that every developer owns at least two database instances on this server: one for unit tests, one for integration tests.
Your continuous integration server should also own two databases of its own.
So you could end up with these databases:

  • $PROJECT_$DEVELOPER_unittest
  • $PROJECT_$DEVELOPER_integration
  • $PROJECT_CI_unittest
  • $PROJECT_CI_integration (used for Selenium)
  • $PROJECT_customer_integration

The last database is meant for database dumps that come back from your customer for debugging purposes.

Okay, most of this should be clear. Never use a shared database for development work.

Versioning

Why should you version your database? For the same reason you version your code: to go back in time and retrace bugs reported by a customer who uses an old but stable version of your program.

In my opinion there are two approaches that get you at least halfway to this goal:

  • Store every database object (table, view, stored procedure, function) in its own .sql file
  • Do database migrations like Ruby on Rails does

1 on 1

Storing every database object means that you have a directory structure like

/tables/tbl1.sql
/tables/tbl2.sql
/views/view1.sql
/views/view2.sql
/sps/stored_procedure1.sql
/functions/function1.sql

You get the idea. Every SQL script contains the CREATE statement for one database object.
This is very charming because you can track structural changes via your commit log.

The problem begins when you want to migrate your developer integration databases and the database of your customer. Stored procedures and functions can easily be replaced because you can use the CREATE OR REPLACE syntax. Views can be replaced by DROP VIEW view1; CREATE VIEW view1 AS…
But – and that’s a big but – you have to compare the current database of your customer with your current table definitions and write the ALTER TABLE statements by hand. Not so funny.
Additionally, you must delete unused views, stored procedures and functions by hand. Otherwise you will end up with a graveyard of dead database objects which are no longer used.

I know of no database tool which can automatically generate a 100% working update script without any pitfalls. And writing a DBMS-independent tool on your own is a lot of work.

1 on n (migrations)

The second approach is to create a database migration script for every completed database change.
First the negative aspect: you will get a lot of files, and you cannot easily do a git/svn diff of the structural changes of a single database object because the changes are distributed over many files.

The good news is: you don’t have to write additional ALTER statements at deploy time because they are already there. That means you simply hand your customer your migration SQL scripts, he executes them in the given order and ends up with the latest database version.

To mitigate the problem of diffing different database versions, you can migrate one database to version n and another database to version n+1. After that you make an SQL export of each and run diff (n).sql (n+1).sql.

A few days ago I came to the conclusion that there is no other effective way to achieve this goal, so I decided to pursue the second approach further.

Implementing migrations

There were some requirements I had in mind:

  • The database development process and the file naming have to be standardized
  • The currently installed version of the database must be stored inside the database itself
  • The migration process must be easy to run
  • The migration process must support pre/post scripts
  • The migration process must support different scenarios (deployment at the customer; no migrations needed on the developer side – just install the latest version; different databases)

Standardization was easy. Every project contains a folder named db which contains four subfolders:

[Image: directory structure of the database migrations]

  • /coredata contains the core data which every database must contain.
  • /fixtures/[integrationtest|unittest] contains the fixtures which are used in the test cases.
  • /handlers contains scripts which can be executed before or after migrations.
  • /migrations contains the migration scripts themselves.

It is important that all files in /coredata, /fixtures and /migrations carry an increasing number. This number does not have to match your SVN revision; it is only important that the files can be executed one after the other in a defined order.
As you can see, I use the format YYYYMMDD_$DAYVERSION as the ordering identifier.

The database must store its migration version. This is needed for doing migrations: every migration file with a version greater than $DATABASE_VERSION must be executed.
In my structure above the schema migration script is executed first (20111219_001_schema_migration.sql) and contains only one statement:

CREATE TABLE schema_migration (
	id INT NOT NULL AUTO_INCREMENT,
	migration_on DATETIME,
	version CHAR(20) NOT NULL,
	filename LONGTEXT,
	PRIMARY KEY (id)
);
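
To make the idea concrete, here is a rough sketch of the migration loop in Java (hypothetical code, not the Ant-based implementation described below; in a real run the installed version would be read from schema_migration and every applied script would be recorded there):

import java.io.File;
import java.util.Arrays;

public class MigrationRunner {
	public static void main(String[] args) {
		// in practice: SELECT MAX(version) FROM schema_migration
		String installedVersion = "20111219_001";

		File[] scripts = new File("db/migrations").listFiles((dir, name) -> name.endsWith(".sql"));
		if (scripts == null) {
			throw new IllegalStateException("db/migrations not found");
		}
		// the YYYYMMDD_$DAYVERSION prefix sorts lexicographically, so sorting by
		// file name yields the execution order
		Arrays.sort(scripts);

		for (File script : scripts) {
			String version = script.getName().substring(0, 12); // e.g. 20111219_001
			if (version.compareTo(installedVersion) > 0) {
				System.out.println("Would apply " + script.getName());
				// execute the script and INSERT a new row into schema_migration here
			}
		}
	}
}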

After this point I started to write a small prototype of a migration definition file which supported different scenarios (different databases and servers, the migration process, a recreate process).

For this definition (see my first draft) I would have had to write my own parser or use Xtext, so I transformed it into an XML sample (see my template), which seemed ideal for xsd.exe and C#/Visual Studio.
But after struggling with serialization problems I took a closer look at the XML file. It looked like… Ant. The processes were more or less the same.
I did some research and came to the conclusion that almost every requirement could be solved with Ant, so I implemented a halfway working solution.
The build.xml currently does not insert the migration version after a file has been applied; I will fix that ASAP.
Please note that you need js.jar, bsf.jar (both for JavaScript), commons-logging.jar and ant-contrib.jar (for the additional tasks) in your Ant classpath to use the build.xml.
The build.xml itself is written for MySQL. You need to customize it for every DBMS you use, but this should be easy to do.

Update: Jens Schauder published a post about Tips for Testing Database Code.

Solved: encoding problems with MS SQL and PHP

Since last week I have been busy with an extremely mysterious problem (things would be boring otherwise): I wrote an import script for a customer which imports data from several text files into an MS SQL Server 2005 database.
During our tests here at the office everything worked fine; there were no UTF-8 or other character set problems, and the umlauts were displayed correctly.

Then the customer got in touch and told us that the umlauts were displayed incorrectly: a “für” turned into a “fnr” and so on. Shame on anyone who now thinks it was a UTF-8 problem. That was not the cause.

The customer was running an American Windows Server 2003 and likewise an American Microsoft SQL Server 2005, whereas we had been using the German versions of both. Was it the different regional settings? No, it was not, as we found out after setting up an identical machine.

I could also rule out UTF-8 problems on the PHP side, because in that case substr() should have turned the “für” into a “fn”.

I stumbled upon the solution more or less by accident: the tool cliconfg.exe (found in %WINDOWS%\system32) offers the option Automatic ANSI to OEM conversion on the DB-Library Options tab. On our systems this option was disabled; at the customer’s site it was enabled. Why? Quite simple: a few months ago the customer’s database was still running on MS SQL Server 2000 and was then upgraded to SQL Server 2005. The default setting for the option mentioned above apparently changed with the version switch, so we simply could not reproduce exactly the same conditions.

What a mess, but: one problem less in the world.

Update: Very important in this context: the umlaut problem only occurs when the PHP script is invoked from the command line. Running the script as a CGI script in an Apache/IIS/… environment produces the correct results.

oracleDiff – a schema differential tool for Oracle

On Friday last week I got a request from a co-worker. He asked me for a simple Java-based application which compares two different revisions of an Oracle database schema. After some discussion about the usefulness of such a tool, we did a quick requirements and workflow analysis.

  1. The user extracts the target database schema information with oracleDiff. Only column names and types are required; there is no need to export stored procedures, functions, references and so on. The schema information should be stored as XML.
  2. The user exports the full table data with oracleDiff. The extracted data should be stored as a binary or flat file; an XML export of large data sets is too memory-intensive.
  3. The user performs a manual upgrade of the changes with the help of the Oracle Enterprise Manager Console or other 3rd-party tools.
  4. The user imports the exported data into the upgraded database; oracleDiff asks for a remapping of columns and informs about column type changes.

Yesterday I started the project and stumbled over two Oracle-specific facts:

  1. DESC $TABLENAME does not work in PreparedStatements; you will receive ORA-00900 from the Oracle JDBC driver. To retrieve table information you have to use SELECT * FROM user_tab_columns WHERE table_name = ? (see the sketch after this list).
  2. As a long-time MySQL programmer I used SHOW TABLES to retrieve all tables of a schema. This does not work either; you have to use SELECT table_name FROM user_tables.
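
For the first point, here is a minimal JDBC sketch of reading the column metadata of a single table (not the actual oracleDiff code; the class and method names are made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OracleColumnReader {
	public static void printColumns(Connection connection, String tableName) throws SQLException {
		String sql = "SELECT column_name, data_type, data_length FROM user_tab_columns WHERE table_name = ?";
		try (PreparedStatement statement = connection.prepareStatement(sql)) {
			// Oracle stores unquoted identifiers in upper case
			statement.setString(1, tableName.toUpperCase());
			try (ResultSet result = statement.executeQuery()) {
				while (result.next()) {
					System.out.printf("%s %s(%d)%n", result.getString("column_name"),
							result.getString("data_type"), result.getInt("data_length"));
				}
			}
		}
	}
}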

The current development version extracts the database schema information and compares two schemas. I hope that I will finish the tool within the next two weeks.

SQLite: comparing date values

This does not work in SQLite (my_date is of type datetime):

SELECT * FROM table tb WHERE tb.my_date > now()

Instead, the following has to be used:

SELECT * FROM table tb WHERE julianday(tb.my_date) > julianday('now')

MySQL 5 and views

Here are a few SQL statements you need when working with views under MySQL 5 (from http://dev.mysql.com/doc/refman/5.0/en/views-table.html).

# list all views
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.VIEWS;
# describe view
# replace ? with view name
SELECT VIEW_DEFINITION
FROM INFORMATION_SCHEMA.VIEWS
WHERE
TABLE_NAME = ?;
# describe view prettyprinted
# replace ? with view name
SELECT REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(VIEW_DEFINITION, ",", "\n"),
"from", "\nfrom"), "where", "\nwhere"), "join", "\njoin"), "and", "\nand")
FROM INFORMATION_SCHEMA.VIEWS
WHERE
TABLE_NAME = ?;

From ISO-8859-1 to UTF-8 in a few steps

I am currently tinkering with a relaunch of my blog, which also puts the migration from ISO-8859-1 to UTF-8 on the agenda.
So that everything works nicely, here is the procedure for a MySQL 5 database:

  1. In PDT/ZS, select UTF-8 as the project encoding and rebuild the project.
  2. If necessary, check the view scripts once more in Notepad++ and convert them to UTF-8 manually (Format > Convert to UTF-8).
  3. Add the obligatory
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />

    to the head tag.

  4. Take a dump of the current MySQL database.
  5. Convert this dump to UTF-8 with Notepad++ (see above), or re-encode it with the small sketch after this list.
  6. Replace CHARSET=latin1 with CHARSET=utf8 in the dump.
  7. Import the new dump and, if necessary, call SET NAMES ‘utf8’ when connecting to the database from PHP.
  8. Be happy.
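
Steps 4 to 6 can also be done without an editor. Here is a minimal sketch in Java, assuming the dump was written as latin1; the file names are placeholders:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class DumpConverter {
	public static void main(String[] args) throws Exception {
		// read the dump with its old encoding ...
		String dump = new String(Files.readAllBytes(Paths.get("dump_latin1.sql")), StandardCharsets.ISO_8859_1);
		// ... swap the charset declaration ...
		dump = dump.replace("CHARSET=latin1", "CHARSET=utf8");
		// ... and write it back UTF-8 encoded
		Files.write(Paths.get("dump_utf8.sql"), dump.getBytes(StandardCharsets.UTF_8));
	}
}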