Divide and Conquer, Medieval II: Total War mod (DaC) experience

Divide and Conquer is a Medieval II: Total War Kingdoms mod set in the Lord of the Rings universe. Technically a "sub" mod of Third Age: Total War, but at this point it is fair to say it is a mod in its own right, since it greatly improves and expands almost every aspect of TaTW.

I have finished the TaTW campaign 3 times before, so when the nostalgia kicked in again I decided to give DaC a try.


Version: 4.6.7
Difficulty: hard (I personally find the "very hard" setting just a grind with no added value in this case, hard is plenty of a challenge)
Faction: Rohan
Style of play: try to follow the lore

Rohan Campaign playthrough – the beginning

I chose Rohan since I've already done the High Elves, Dwarves and Gondor in TaTW. Events described here are as they unfolded in my playthrough, but there can be major differences depending on faction politics.

You begin with a handful of known generals such as Theoden, Theodred, Eomer, Hama, Gamling and others. Your initial campaign will revolve entirely around defending against and defeating Isengard, which will attack almost immediately. You need to send all your available armies to Westfold (Foldburg) and defend it at all costs. Losing the region will set you back tens of turns, so try to hold out. Place two general units there since they are overpowered. You can do a bit of research on the side, but most of your spending will go to replenishing the armies to defend against wave after wave of attacks. Eventually Saruman himself may lead a siege, and killing Saruman will make the whole of Isengard fold into rebel territory.

In every captured settlement, the number one building you should always upgrade first is culture, followed by masonry. You need to convert non-native culture to your own, and that takes tens of turns, so you want to speed it up as much as possible.

You then proceed to expand further west and defeat the Dunlendings.

You should probably forge an alliance with the Clans of Enedwaith to get some help here and secure your western border for the rest of the campaign. Create trade agreements with all the neighbouring nations; even some of the dark forces are acceptable in the beginning, when you need to build up your income.

The next natural progression is to go further into the Misty Mountains and defeat the goblins there. At this point Moria will probably be divided, so you can gift West Moria back to the Dwarves of Khazad Dum and forge an alliance.

Building up your income

Depending on how other factions are progressing, you will probably have the whole western border secured, with Enedwaith, the Dunedain and the High Elves at your borders (forge alliances if possible). This is still not a big enough region for sufficient income, so you can skip over the High Elves and continue your conquest against the Remnants of Angmar. Another possibility is Dol Guldur, which is relatively easy to take but comes with two drawbacks. First, if you retake any settlement which belonged to the Woodland Realm you should gift it back, or they might think about reclaiming it by declaring war on you. Second, you will likely end up sharing a border with the Easterlings. If you take Dol Guldur, take the 2-3 cities inside the safety of the forests and do not expand further east, except maybe for one single stronghold (with ballista towers). Getting into a major war with the Easterlings is the absolute number one mistake, as it drains a lot of your resources and you will make almost no advances into their vast territories with no choke points.

To War!!!

This far into the game Gondor can start having trouble defending itself, so with sufficient income from your expansion you should be able to finance a war against Mordor. You still have expansion options in the north against the Snow Orcs or Ered Luin (which chooses the dark side in an early Blue Wizard event), but if Gondor is getting into trouble you should not wait any longer.

I personally was a bit late and Minas Tirith had already fallen, so I had to re-conquer it, which was an unnecessary delay.

You won't be able to resist the urge to recreate the scene

War against Mordor is pretty much the same whichever faction you are playing, except maybe Dale and the Dwarves of Erebor (which can backdoor into Mordor from the east). It is all about creating choke points and progressing eastwards settlement by settlement.

I happened to get some Gondor AI reinforcements, which felt nice. Allied forces will join you if they are nearby, or call you to help mid-turn.

Let's focus on the campaign map around Minas Tirith for a bit.

It is all about the choke points

As you arrive in the east, you should take Galebrin (not seen in the picture, the region north of Cair Andros), a small, useless village in The Dead Marshes, and gift it to an ally or a neutral faction. This blocks the Easterlings or Mordor from advancing on your native lands across the Anduin, which would be a total waste of time and a distraction, so sealing that off is an utmost priority.

You should also take Calenhad, because you want as many cities as possible in the east to develop your armies closer to the war front. Training armies in Rohan and moving them to Gondor takes many turns and is inefficient. If Calenhad is still owned by Gondor when you arrive, you can easily swap it for any region you recapture, or just buy it if your finances allow it.

Next is Cair Andros, which you also do not really want to own, because it creates a secondary target next to Osgiliath. Once you take the whole of Osgiliath, Cair Andros can probably stay with Gondor, since Mordor will be fully focused on you and will not even attack there. If it is still attacked and falls while owned by Gondor, gift it to a neutral party.

Leave Minas Tirith to Gondor. It does not make much sense tactically, but if you are into the lore it is the right call, and it leaves them some good income to fight their own wars.

Osgiliath is your primary choke point for quite a few turns, but it is relatively easy to defend because the battle map is hard to navigate and has narrow paths.

A single bridge leads to East Osgiliath

This setup makes Osgiliath the only route you need to defend. Once Osgiliath gets army production, you can continue on to Minas Morgul. You repeat this step-by-step progression, waiting for conquered cities to be able to produce armies before taking the next one.

How insanely good modding is that?? Siege of Minas Morgul
Glitching up the mountain for an epic view of Minas Morgul

At some point after Minas Morgul and Cirith Ungol, you can capture the two cities to the south for extra production. You can gift the settlement below Ostithil to Gondor or a neutral party to avoid a second front with the Haradrim.

When Gondor is safe from Mordor they will start thriving and expand aggressively into Harad and Rhun. You don't really need to worry about those two factions unless you arrive late and they have already progressed deep into the southern Gondor regions.

With Rohan taking the brunt of the Mordor forces, Gondor is free to take care of their southern borders and go on a war path

Once you have Minas Morgul (and Cirith Ungol) you might feel a bit exposed, because you will also get attacked from the Black Gate, usually at Osgiliath. You can take a bit of a rest by gifting Cirith Ungol to a neutral party, but that might not last long, since Mordor tends to attack them anyway, sometimes within just a few turns. Depending on diplomatic relations, though, you can buy yourself quite a bit of time this way.

Taking Barad Dur is then just a matter of time. Being this close has an annoying side effect: each defeated Nazgul commander respawns in Barad Dur and is ready for another siege in a very short time, so you are almost constantly defending against 10-star commanders. You should defend with equally strong commanders to avoid unnecessary fleeing troops.

You may also encounter Sauron as a commander. As a unit he is almost impossible to kill unless you manage a catapult hit, but defeating his army will send him fleeing and weaken the faction.

Sauron Encounter

Once you start taking cities inside Mordor it gets increasingly easier to finish it off.

Final Assault
Siege of Barad Dur, get those catapults!

Once a faction is on its last stand, its capital will be filled up with armies and can be pretty hard to conquer, especially if it is not open-type terrain, like the dwarven caves. Barad Dur is basically one small tunnel with only two access points. Bringing catapults to fire directly into the tunnel slope is a must, or you will simply run out of time even with enough units in the battle. You simply can't kill 3700+ units, many of them heavy types, encamped in such a small area before the clock runs out.

Incredible views

What could go wrong?

Gifting regions to neutral parties works for some time, but eventually everyone is at war with Mordor. These gifts can buy you many turns of peace, or Mordor might just attack them immediately. Depending on your luck, you could start getting attacked by Mordor or the Easterlings through the Dead Marshes, right into your core territory where you have your main production and mostly undefended cities.

Another possible drain is attacking the Snow Orcs or Ered Luin when you are already neck-deep in the eastern war. Your northern production takes quite some time to get up to speed, and these two factions are holed up in dwarven caves which are hard to capture.

Beautiful dwarven battle maps

You essentially need full armies with catapults to have any chance of breaking through their heavy units in time.

These are a PITA to capture

Rohan in itself is not a particularly powerful faction when it comes to units. You have almost no heavy infantry except the Guards of Meduseld, which can only be produced in Edoras. Your heavy cavalry is good but not much better than anyone else's, and your archers are poor. In these cave scenarios you have no advantages. Unit production in DaC is generally very slow; you need many fully upgraded settlements just to be able to retrain damaged armies every turn. A lot of the time you won't be able to retrain due to lack of unit availability and will need to send units to faraway settlements.

Gondor inflicting maximum pain

Finally, the biggest problem is how to avoid a major war with the Easterlings. If you expand into Dol Guldur you get a border with them sooner or later (Dale tends to be pushed back). You can usually defend there with fully upgraded walls (ballista towers). You can potentially conquer one of their nearby strongholds and use it as a gravity point. Ballista towers can generally thin out their armies considerably before they even breach the walls, so defending is quite easy, but it is problematic to replenish the defending army in the period before the settlement can produce units. You should absolutely not push deeper into their territory while you are fighting Mordor; they will push you back and probably crush you. By staying in close proximity you drain a bit of their resources and help Dale avoid being eradicated.

Easterlings are basically an overpowered version of Rohan. Their cavalry is on par with yours, their archers are better and they have more heavy units, so you have no particular advantage over them in battles.

After end

You can have some more fun after bringing down Barad Dur. I personally finished up in the east and then defeated Ered Luin for good measure.

Ered Luin Resolve

Weird things can also happen with other factions. In my case, the Dwarves of Khazad Dum went on a warpath and completely eradicated Lothlorien and the Vale, leaving dozens of armies on my borders. If you don't watch your relationships (for example, trespassing without Military Access), you can get yourself an enemy you can't handle.

Dwarves overpowered Lothlorien and Vale of Anduin, leaving dozens of armies

Game event dialogs

Throughout the campaign you get events and commander traits as you conquer important settlements. Your commander will receive the title of their last big conquest, or a generic title such as Aggressive or Defender, depending on whether they are conquering regions or doing a lot of defending. This attention to detail is incredible.

Theodred conquers Dol Guldur

These events with very nice descriptions make the campaign feel alive.

Morannon broken

As factions near destruction or are completely destroyed, you also get these.

For Frodo!!!
Ered Luin Dwindles

Trait Descriptions

Trait descriptions are LOTR based and absolutely brilliant. Some samples below.

Eomer Command
Glorfindel Renown
Eomer Loyalty
Theoden Acumen


The soundtrack is absolutely brilliant. It includes the LOTR movie soundtracks, The Hobbit trilogy soundtrack, LOTR Online, Battle for Middle Earth and many more for different factions. Best of all, almost all of the picked themes work nicely with the battle scenarios and the campaign map, as if they were custom made for this mod, and they really give you that Middle Earth feeling while playing.

Someone made a playlist on YouTube.


All we have to decide is what to do with the time that is given us..

DaC is one of the best mods I have played in my life and is everything a Total War and LOTR fan could wish for. The soundtrack, the attention to detail, the city models and the campaign maps: everything just screams of the incredible effort that went into making this mod as good as it is. If you haven't yet, give it a try.


Building Qt 5.15 on Windows with OpenSSL

I have written about the many problems of building Qt 5 with OpenSSL in the past. Several years later, it is time to upgrade to the latest Qt 5.15, which is presumably the last in the Qt 5 series. This time I decided to drop Windows XP support, since it is just too much work to get working, and the XP market share is much lower today than it was 5 years ago.

Since the Qt build documentation is still lacking, here is the latest account of the ordeal.

First things first: install Strawberry Perl and Python 2 (yeah… really).

For speedier builds, also install jom. We will also need vcpkg to get the OpenSSL binaries. Needless to say, all these tools need to be in your system PATH.

Now get the code:

git clone git://code.qt.io/qt/qt5.git
cd qt5
git checkout v5.15
perl init-repository

I think you can tell the init-repository script not to download some of the modules you don't need (it pulls 12GB of data!), but I couldn't be bothered to find the flags for that. You can probably avoid it by downloading the source archive instead of going via git.

Today we finally have C/C++ package managers available, so we don't have to bother building the dependencies anymore. Vcpkg and conan are both great tools that do the job. So forget about building OpenSSL and just install the binaries with vcpkg:

./vcpkg.exe install openssl

In your user env, add

OPENSSL_LIBS=-llibssl -llibcrypto

Now we can run the configure script inside the qt5 folder. First we do a release build and we link to openssl (run in Visual Studio cmd):

.\configure.bat -v -release -opensource -nomake examples -opengl desktop -platform win32-msvc2015 -openssl -openssl-linked -I C:/Users/me/git/vcpkg/installed/x86-windows/include -L C:/Users/me/git/vcpkg/installed/x86-windows/lib

Change the location of OpenSSL include and lib folders to whatever your vcpkg installation directory is. I am still targeting msvc2015 for now but I plan to transition to 2019 eventually.

If you change the configure parameters, make sure to delete the config.cache file, since in my experience it likes to keep unwanted information from previous runs.

Build it with jom:

jom install

Now you have a release build and you can add it in Qt Creator under Tools->Options->Qt Versions by giving it the path to C:\Qt\Qt-5.15.1\bin\qmake.exe.

If you also need debug build of Qt, you can repeat the configure and build step by replacing -release flag with -debug (remember to delete config.cache first).

And there you have it… Qt built from source with OpenSSL support. Once you build your program you will also have to copy all the relevant .dll files into the .exe build directory (Qt5Core.dll, Qt5Gui.dll, Qt5Network.dll…). The number of libraries you need to copy depends on what you are actually using in your code. You will also need to copy libcrypto-1_1.dll and libssl-1_1.dll from the vcpkg install directory.

For debug builds you need to copy the debug libraries (Qt5Cored.dll) instead.
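If copying DLLs by hand gets tedious, Qt also ships the windeployqt tool, which can collect most of the required Qt DLLs automatically (a sketch using the install path from above; note that it does not handle the OpenSSL DLLs, and the path to your .exe is a placeholder):

```
C:\Qt\Qt-5.15.1\bin\windeployqt.exe path\to\myapp.exe
```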

The future appears even brighter now that bincrafters have packaged Qt as a conan recipe. This means the next time I need to depend on Qt, I will run a single conan command and get the proper build automagically delivered to my PC along with all the transitive dependencies. The future is now.




An OpenSprinkler success story

I wanted to automate the watering system at home, preferably using open-source and DIY systems. The initial plan was to go with a plain RPi, OpenHAB and some GPIO code driving the sprinkler valves, but the problem was creating a useful UI to control the system, since OpenHAB is too clunky and generic looking. I was also not quite ready to dive deep into embedded programming and the OpenHAB programming model. OpenSprinkler seemed to have everything I needed: an RPi hat with all the correct electrical outputs, open-source firmware, and an Android app I could modify myself if needed. In the end, programming the sequences myself and trying to make a decent UI would be just too much work for a small pet project, so I went with a ready-made solution.

The requirements

  1. Three separate zones around the house, max 7 sprinklers per zone.
  2. Each zone must be turned on separately due to the pressure requirement for the sprinklers to work.
  3. Pump that drives the water must be turned on automatically with each zone valve.

Setting up the OSPi

OpenSprinkler offers fully assembled systems, but I decided to go the DIY route, using my own RPi and just buying the OSPi hat.

  • RPi 4 Model B 2GB
  • RPi official charger
  • OSPi (VAC)
  • 32GB SD card
  • Orbit 57056 2-Pin European Transformer

Finding a 24VAC power supply with an EU plug was quite a challenge; the listed model from Orbit was one of the rare ones I could find online (on Amazon).

The OpenSprinkler documentation mentions that a separate power supply for the RPi is recommended. This was confirmed during testing, where I saw dmesg errors about insufficient voltage and the RPi rebooting endlessly. I ended up using the official RPi charger and the 24VAC supply at the same time.

Installing Raspbian and the OSPi firmware was easy, with no problems encountered. Assembling the OSPi was also unproblematic, other than drilling some holes into the supplied enclosure for the USB cable and WiFi adapter.

The WiFi

The built-in WiFi on the RPi would not work at even half of the required distance and was simply horrendous. The onboard WiFi can be disabled by modifying /boot/config.txt and adding

dtoverlay=disable-wifi

in the [all] section.

After checking compatibility lists and reviews of RPi-compatible USB WiFi adapters, I went with the Edimax EW-7811UN. I disabled the integrated card and configured /boot/wpa_supplicant.conf to connect to the dedicated WiFi extender AP as a priority.

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

# The ssid values below are placeholders; the extender AP gets the
# higher priority so it is preferred
network={
    ssid="extender-ap"
    psk="..."
    priority=2
}

network={
    ssid="home-ap"
    psk="..."
    priority=1
}

In the end I managed to achieve a not-so-great but stable signal from the house to the controller box, at around -60dB. For the WiFi extender I went super cheap with a TP-LINK TL-WR840N (15EUR), positioning it so that no walls block the signal other than a single garage door. I also added a small script to root's cron to automatically restart the RPi in case of any network downtime.

# Ping the router; the IP here is a placeholder, adjust it to your network
ping -c4 192.168.1.1 > /dev/null

if [ $? != 0 ]; then
  sudo /sbin/shutdown -r now
else
  echo $(date) "Internet is UP"
fi
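The script can then be scheduled from root's crontab; for example, every 5 minutes (the script path and log location are hypothetical):

```
*/5 * * * * /home/pi/check_network.sh >> /home/pi/check_network.log 2>&1
```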

Driving the Pump

Looking for a relay to turn the pump on and off, I decided to go with an off-delay relay as an extra safety measure: it will automatically turn off after a selected period of time. This is in case the OSPi goes haywire and does not turn off as scheduled, or someone makes the mistake of turning the sprinklers on for too long. The pump draining all the water and running dry is a very bad scenario I would like to avoid. The model is a Tracon multifunction relay, AC/DC 12-240V, driven by the OSPi's 24VAC.

Putting it together

The relay is connected to OSPi port 0 (the master zone), which is always turned on together with valve 1, 2 or 3. The relay drives the first power socket, which powers the pump. The other two sockets are for the Orbit 24VAC supply and the RPi charger. This way the pump can be disconnected at any time and used manually.

The valves

24VAC valves are quite common. I found three candidates, from Orbit, Rainbird and Cleber. In the end it came down to price and availability, so I went with 3x Rainbird CP075 off eBay, roughly $30 each.

Finally, to connect the valves to the OSPi I got some 4×0.75mm cable and some waterproof clips for the connections on the valve side. These are automatic clips sitting in a box full of gel, which seals them when closed.

Operation and conclusion

It turns out the OSPi firmware and app have exactly the functions I need to drive this setup. The master zone translates perfectly into the pump relay. For the valves, the "continuous" setting (which is the default) allows you to set up a single schedule program, and the OSPi will automatically drive each valve one after another rather than all at once (which would not work due to low pressure). Without that setting, one would have to write a separate program for each valve, which is a bit clunky.

One thing that does not work quite as well is the automatic rain delay. The idea is: if the sprinklers are scheduled to run today but there is rain in the forecast, delay the program for some time, such as a day. Unfortunately, if it then does not rain at all, the delay remains. It would appear that the OSPi only checks the forecast and does not adjust the delay according to the actual millimeters of rain that have fallen. I need to research this function in more depth to figure out the exact behavior and whether I can improve upon it.

Another glitch that appears once a month or so is that the OSPi randomly becomes inaccessible. This is fixed by rebooting the main router. I am not sure yet what exactly causes the problem, or whether the auto-reboot script works; more investigation is needed. It probably boils down to the not-so-great WiFi connection.

In the end I am quite happy I went with OpenSprinkler and not a full DIY solution. It saved time, does everything I require, and I am able to modify it if ever needed.


2023 update

After the system lay dormant through the winter of 2022/2023, the RPi would no longer boot. The SanDisk Ultra SD card seems to have gotten corrupted for some reason, so I had to re-image and reconfigure the OSPi again. I replaced the card with a SanDisk Industrial series card, which is supposedly more tolerant to heat and cold. We'll see how long this one lasts.

Other than that the system is still working great.


Debugging Laravel in Eclipse PDT

I don't use PHP enough to justify buying a PhpStorm license, so I am using Eclipse PDT instead. I am a bit rusty with Eclipse and PHP and couldn't really find anything on Google about debugging Laravel projects in Eclipse. I finally figured it out; here is how.

Examples are done on Eclipse IDE Version: 2019-12 (4.14.0).

First, configure XDebug with Eclipse. On Fedora you can install it via

sudo dnf install php-xdebug

Check that XDebug remote debugging is enabled with a phpinfo() test page; if not, add the following line to your php.ini:

xdebug.remote_enable = 1
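Depending on the setup, a couple of related XDebug 2 settings may also be worth checking. The values below are the defaults Eclipse expects; adjust them if your debug port differs:

```
xdebug.remote_host = 127.0.0.1
xdebug.remote_port = 9000
```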

Now in Eclipse, we first add a server. In Window->Preferences->PHP->Servers add a new server like this:

The document root is our Laravel public folder, and the base URL is the default host and port of

php artisan serve

Now check your Debug settings in PHP->Debug, select the newly created server and check that XDebug is set as the debugger:

If XDebug is not present here, configure it under PHP->Debug->Debuggers first.

Finally, under General->Web Browser, we select an external web browser to launch our website instead of integrated Eclipse Browser.

We are done with Preferences, so close the dialog. Next to the Debug button in the main Eclipse toolbar, click the dropdown arrow and select Debug configurations…

Create a new PHP Web Application config like this.

We point the file to the public index and map it to the root URL (the default for artisan serve). Under the Debugger tab, check that XDebug is selected.

Now go to the terminal and serve your laravel app as you would with

php artisan serve

Finally, run the "web" Debug configuration from Eclipse. Eclipse should go into Debug mode and open your site in the selected browser. You can now place breakpoints in controllers or wherever, and things just work as you would expect.


Apache http to https redirect – use 307

Who knew that a simple thing like HTTP redirects could be so complicated? It turns out clients will just change POST to GET on a 301 (Postman, curl, everyone?), and the same goes for 302, which in practice behaves like 303; that too is an old implementation "bug". Yeah, seriously.

If you have a REST API with POST (or other non-GET) endpoints (who doesn't?), this behaviour will completely break everything. Many guides (top Google results) for configuring Apache redirects do not mention this problem. The code of choice would be 308 Permanent Redirect, but that is fairly new, so I would not risk it; older clients don't know it exists. The only thing left is 307, which does not allow changing the method on redirect, exactly how it should be.


<VirtualHost *:80>
    ServerName example.com
    Redirect 307 / https://example.com/
</VirtualHost>



Setting env variables with hyphen and running a program

Docker Compose is very permissive about naming your environment variables: it allows hyphens and other "special" characters in variable names. When you need to use such variables in a regular shell you are out of luck, as bash and many other shells do not allow hyphens in variable names. But this is merely a shell restriction, so how do you do it?

With env

env -i 'TZ=Europe/Berlin' \
'PORT=8080' \
'BASE-URL=http://localhost:8080' \
'DB[0]_CONNECTION-URL=jdbc:postgresql://localhost:5432/postgres' \
'DB[0]_USERNAME=username' \
'DB[0]_PASSWORD=password' java -jar myapp.jar

Note that with -i, env ignores all inherited environment variables, so you might need to redefine the ones your program needs:

env -i 'TZ=Europe/Berlin' \
'PORT=8080' java -jar myapp.jar
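You can quickly verify that env accepts names bash rejects (the variable name here is just an example):

```shell
# A plain assignment like BASE-URL=... fails in bash with "command not found",
# but env passes any NAME=value string straight into the child's environment:
env 'BASE-URL=http://localhost:8080' printenv BASE-URL
# prints: http://localhost:8080
```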



Obscure IntelliJ IDEA "bug" with maven jdk profile activation "not working"

Since Java 9 it has been popular to pull in additional dependencies that were removed from the core JDK through a Maven profile.
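Such a profile typically looks something like this (a sketch; the profile id and the jaxb-api version are assumptions):

```xml
<profiles>
  <profile>
    <id>java11-deps</id>
    <activation>
      <!-- Active on JDK 11 and newer -->
      <jdk>[11,)</jdk>
    </activation>
    <dependencies>
      <dependency>
        <groupId>javax.xml.bind</groupId>
        <artifactId>jaxb-api</artifactId>
        <version>2.3.1</version>
      </dependency>
    </dependencies>
  </profile>
</profiles>
```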


Using Java 11, jaxb-api would correctly show up in the Maven dependency tree, and the Docker-packaged application would work correctly with the dependency jar on the classpath.

However, when running the app from IntelliJ it would fall apart with

Exception in thread "main" java.lang.NoClassDefFoundError: javax/xml/bind/annotation/XmlRootElement

Opening the module dependencies in the IDE shows that jaxb-api is not on the list of dependencies. IntelliJ is therefore not activating the Maven profile correctly, even though:

  • maven compiler release is set to 11
  • project and Module SDK is set to Java 11
  • app is run with Java 11

Why is that? There is this snippet in the IntelliJ Maven Profiles documentation:

If you use a profile activation with the JDK condition (JDK tags in the POM: <jdk></jdk>), IntelliJ IDEA will use the JDK version of the Maven importer instead of the project's JDK version when syncing the project and resolving dependencies. Also, if you use https certificates, you need to include them manually for the Maven importer as well as for the Maven runner.

Why the IntelliJ developers decided to tie Maven profile activation to the importer, I do not know. It would make much more sense to tie it to the Project/Module SDK. If an app is being developed with a Java 11 target, one would expect that profile to be active at build and run time, not at import time.

With more digging around I managed to find an issue complaining about this problem. Unfortunately the issue is 4 years old now with no apparent activity. Preferably the default should be changed; if not, at least give us an option to choose the source of profile activation in the preferences.




Receive only the data your client needs – full dynamic JSON filtering with Jackson

A lot of the time, the JSON returned by your REST API grows into incredibly big structures and data sizes due to business logic complexity added over time. Then there are API methods returning lists of objects, which can be huge. If you serve multiple clients, each one can have different demands on what is and is not needed from that data, so the backend can't decide on its own what to prune and what to keep. Ideally, the backend would return the full JSON by default but allow clients to specify exactly what they want and adjust the response accordingly. We can achieve this using the power of the Jackson library.

The goal: allow REST API clients to decide on their own which parts of the JSON to receive (full JSON filtering).

Resources for this tutorial:
– Microprofile or JakartaEE platform (JAX-RS)
– Jackson library
– Java classes (lib) representing your API responses which are serialized to JSON
– some custom code to bring things together

The lib module

First let's define a few classes which represent our JSON responses.

public class Car {

  private Engine engine;

  private List<Wheel> wheels;

  private String brand;

  //Getters and setters..
}

public class Wheel {

  private BigDecimal pressure;

  //Getters and setters..
}

public class Engine {

  private int numOfCylinders;

  private int hp;

  //Getters and setters..
}

Our lib serialized to JSON would look something like this:

    "engine": {
        "numOfCylinders": 4,
        "hp": 180
    "wheels": [
            "pressure": 30.2
            "pressure": 30.1
            "pressure": 30.0
            "pressure": 30.3
    "brand": "Jugular"

Let's say one of our clients only needs the engine horsepower and the brand information. We want to be able to specify a query parameter like filter=car:engine,brand;engine:hp and receive the following:

    "engine": {
        "hp": 180
    "brand": "Jugular"

Step in Jackson

Jackson provides an annotation for such tasks called @JsonFilter. This annotation expects a filter name as a parameter, and a filter with that name must be applied to the serialization mapper, for example:

FilterProvider filters = new SimpleFilterProvider()
    .addFilter("carFilter", SimpleBeanPropertyFilter.filterOutAllExcept("wheels"));
String jsonString = mapper.writer(filters)...

As you can see, everything we need is already there, but it is a rather static affair. We need to take this and make it fully dynamic and client-driven.

The reason a filter needs a name is that each one is bound to a class, and attribute filtering is done on that class. What we need to do is transform car:engine,brand into a carFilter with SimpleBeanPropertyFilter.filterOutAllExcept("engine", "brand").
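The transformation itself can be sketched as plain string parsing, independent of Jackson (the class and method names here are hypothetical):

```java
import java.util.*;

public class FilterParam {

    // Parse "car:engine,brand;engine:hp" into
    // {carFilter=[engine, brand], engineFilter=[hp]}
    public static Map<String, Set<String>> parse(String param) {
        Map<String, Set<String>> filters = new LinkedHashMap<>();
        for (String entry : param.split(";")) {
            String[] parts = entry.split(":");
            // "car" -> "carFilter", matching the filter naming convention
            String filterName = parts[0] + "Filter";
            filters.put(filterName, new LinkedHashSet<>(Arrays.asList(parts[1].split(","))));
        }
        return filters;
    }

    public static void main(String[] args) {
        System.out.println(parse("car:engine,brand;engine:hp"));
    }
}
```

Each entry of the resulting map can then be fed into a SimpleFilterProvider via addFilter with SimpleBeanPropertyFilter.filterOutAllExcept.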

For starters, let's add the filters to our classes:

@JsonFilter("carFilter")
public class Car {}

@JsonFilter("engineFilter")
public class Engine {}

@JsonFilter("wheelFilter")
public class Wheel {}

There is one thing about this that bothers me: the filter name is a static String, so it is refactor-unfriendly if the class name changes some day. Couldn't we just derive the filter name from the name of the underlying class? Yes we can, by extending Jackson's introspection:

public class MyJacksonAnnotationIntrospector extends JacksonAnnotationIntrospector {

    @Override
    public Object findFilterId(Annotated a) {
        JsonFilter ann = _findAnnotation(a, JsonFilter.class);
        if (ann != null) {
            String id = ann.value();
            if (id.length() > 0) {
                return id;
            } else {
                try {
                    //Use className+Filter as filter ID if ID is not set, e.g. Car -> carFilter
                    Class<?> clazz = Class.forName(a.getName());
                    return StringUtils.uncapitalize(clazz.getSimpleName()) + "Filter";
                } catch (ClassNotFoundException e) {
                    //Not a class-level annotation target; fall through
                }
            }
        }
        return null;
    }
}

With this, any class annotated with @JsonFilter("") will automatically get a filter called classNameFilter. We no longer need to specify filter names and keep them in sync with class names.

Our model now looks like:

@JsonFilter("")
public class Car {
    private Engine engine;
    private String brand;
    private List<Wheel> wheels;
}

@JsonFilter("")
public class Engine {
    private int numOfCylinders;
}

@JsonFilter("")
public class Wheel {}

The next step is to transform the query parameters into our filter structure and apply it.

First, register a Jackson provider for the JAX-RS server:

@Provider
public class JacksonProvider extends JacksonJsonProvider implements ContextResolver<ObjectMapper> {

    private final ObjectMapper mapper;

    public JacksonProvider() {
        mapper = new ObjectMapper();
        mapper.registerModule(new JavaTimeModule());
        mapper.setFilterProvider(new SimpleFilterProvider().setFailOnUnknownId(false));
        mapper.setAnnotationIntrospector(new MyJacksonAnnotationIntrospector());
    }

    @Override
    public ObjectMapper getContext(Class<?> type) {
        return mapper;
    }
}

We register our own introspector and disable failures on unknown filter IDs (in case the client filters by something nonexistent).

The provider must be registered in your REST Application:

public class MyApplication extends Application {

    @Override
    public Set<Class<?>> getClasses() {
        Set<Class<?>> classes = new HashSet<>();
        classes.add(JacksonProvider.class);
        return classes;
    }
}

Finally, we implement our own MessageBodyWriter to override the default serialization and apply the filters dynamically.

@Provider
public class JsonFilterProvider implements MessageBodyWriter<Object> {

    @Context
    private UriInfo uriInfo;

    //Injected by the container (CDI)
    @Inject
    private JacksonProvider jsonProvider;

    public static final String PARAM_NAME = "filter";

    @Override
    public boolean isWriteable(Class<?> aClass, Type type, Annotation[] annotations, MediaType mediaType) {
        return MediaType.APPLICATION_JSON_TYPE.equals(mediaType);
    }

    @Override
    public long getSize(Object object, Class<?> aClass, Type type, Annotation[] annotations,
                        MediaType mediaType) {
        return -1;
    }

    @Override
    public void writeTo(Object object, Class<?> aClass, Type type, Annotation[] annotations,
                        MediaType mediaType, MultivaluedMap<String, Object> httpHeaders,
                        OutputStream outputStream) throws IOException, WebApplicationException {

        String queryParamValue = uriInfo.getQueryParameters().getFirst(PARAM_NAME);
        if (queryParamValue != null && !queryParamValue.equals("")) {

            SimpleFilterProvider sfp = new SimpleFilterProvider().setFailOnUnknownId(false);

            //We link the @JsonFilter annotation with a dynamic property filter
            for (Map.Entry<String, Set<String>> entry : getFilterLogic(queryParamValue).entrySet()) {
                sfp.addFilter(entry.getKey() + "Filter", SimpleBeanPropertyFilter.filterOutAllExcept(entry.getValue()));
            }

            jsonProvider.locateMapper(aClass, mediaType).writer(sfp).writeValue(outputStream, object);
        } else {
            jsonProvider.locateMapper(aClass, mediaType).writeValue(outputStream, object);
        }
    }

    //Map of object names to sets of fields
    private Map<String, Set<String>> getFilterLogic(String paramValue) {
        // ?filter=car:engine,brand;engine:numOfCylinders
        String[] filters = paramValue.split(";");

        Map<String, Set<String>> filterAndFields = new HashMap<>();

        for (String filterInstance : filters) {
            List<String> pair = Arrays.asList(filterInstance.split(":"));
            if (pair.size() != 2) {
                throw new RuntimeException("Malformed filter: " + filterInstance);
            }
            Set<String> fields = new HashSet<>(Arrays.asList(pair.get(1).split(",")));
            filterAndFields.put(pair.get(0), fields);
        }
        return filterAndFields;
    }
}

The getFilterLogic method parses the query parameter into a map of class names to sets of field names, which is then applied as Jackson filters.
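As a standalone illustration, the same parsing logic can be exercised outside the JAX-RS stack. The class name FilterParamDemo and the parse helper below are made up for this demo; only the splitting logic matches the article:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class FilterParamDemo {

    //Parses "car:engine,brand;engine:numOfCylinders" into
    //{car=[engine, brand], engine=[numOfCylinders]}
    static Map<String, Set<String>> parse(String paramValue) {
        Map<String, Set<String>> filterAndFields = new HashMap<>();
        for (String filterInstance : paramValue.split(";")) {
            String[] pair = filterInstance.split(":");
            if (pair.length != 2) {
                throw new IllegalArgumentException("Malformed filter: " + filterInstance);
            }
            filterAndFields.put(pair[0], new HashSet<>(Arrays.asList(pair[1].split(","))));
        }
        return filterAndFields;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> result = parse("car:engine,brand;engine:numOfCylinders");
        System.out.println(result.get("car").contains("brand"));   // true
        System.out.println(result.get("engine"));                  // [numOfCylinders]
    }
}
```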

Finally, we need to register JsonFilterProvider in our Application, as we did with JacksonProvider:

public class MyApplication extends Application {

    @Override
    public Set<Class<?>> getClasses() {
        Set<Class<?>> classes = new HashSet<>();
        classes.add(JacksonProvider.class);
        classes.add(JsonFilterProvider.class);
        return classes;
    }
}

One small deficiency of this solution is that once you specify a class with fields to filter, it will be filtered wherever it appears in the nested JSON structure; you can't filter a specific class only at a specific level. Realistically, I think this is a minor problem compared to the benefits and the simplicity of the implementation.

Finally, a question on documentation: how do you tell the client developer about all the possible filter object names and their attributes? If you use OpenAPI, you are 95% there. Simply document that one can filter by model name followed by attribute names; the client developer can easily figure out the names from your OpenAPI specification. The only remaining problem is when you don't want to allow filtering on all classes. In this case, my approach would be to mark a filterable class in its OpenAPI description:

@ApiModel(description = "[Filterable] A car.")

This manual approach to documentation goes against the rest of the paradigm, so a real purist would write an OpenAPI extension that introspects all @JsonFilter annotations and modifies the descriptions automatically. But let's leave that for a future blog post.


A similar, more advanced, out-of-the-box solution is Squiggly, which also uses Jackson under the hood.



Updating server from Debian Stretch to Buster

Not the most pleasant experience. I expected a smoother upgrade from the Debian team; upgrading from 8 to 9 was a walk in the park compared to this.

1. MySQL silently fails to start after upgrade

MySQL was left behind at version 5.5 after the upgrade and would just not start anymore, probably segfaulting. There is no mysql-server package anymore, so I really had no other option but to remove it and install MariaDB. In addition, I had trouble running MariaDB due to the requirement to run mysql_upgrade, but I couldn't run that because I had no working MySQL server instance. Installing the package default-mysql-server instead somehow solved the problem.

2. phpMyAdmin removed from packages

Not sure how maintaining phpMyAdmin is such a big task that the package was dropped from the repos. The regular setup is simply unzipping the code and adding an Apache config.
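For reference, a minimal Apache alias config for a manually unzipped phpMyAdmin might look like the sketch below. The file path and the unzip location are assumptions; adjust both to your layout:

```apacheconf
# /etc/apache2/conf-available/phpmyadmin.conf (hypothetical path)
Alias /phpmyadmin /usr/share/phpmyadmin

<Directory /usr/share/phpmyadmin>
    Options FollowSymLinks
    DirectoryIndex index.php
    Require all granted
</Directory>
```

Enable it with a2enconf phpmyadmin and reload Apache.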

3. docker fails due to nftables switch

Docker is such a big and important package these days, and yet it breaks because Buster switched the default firewall backend from iptables to nftables. I would expect the upgrade process not to make the switch in this case.

4. apt autoremove anomaly

For some reason, running autoremove wanted to purge essential packages such as php, gcc and python3. I did not pay too much attention, but alarms started going off when ifup was being removed and my ssh connection was lost. ??????????


Luckily, that was all of the trouble; dealing with broken WordPress plugins was a relaxing task afterwards.


HTTP Accept-Language request header to ResourceBundle

The HTTP Accept-Language header is sent by the client to tell the backend the preferred languages for the response. In Java, the go-to utility for handling localization is ResourceBundle.

What is missing is a standard way to properly convert the input header to the correct ResourceBundle. Specifically,

ResourceBundle i18n = ResourceBundle.getBundle("bundles/translations", request.getLocale());

is insufficient. The HttpServletRequest::getLocale() method returns the top preferred locale, but if no ResourceBundle exists for it, getBundle will fall back to the default locale instead of going down the priority list. For example, this header:

Accept-Language: de-DE;q=1.0,fr-FR;q=0.9,en-GB;q=0.8

when the backend is missing de-DE translations, will return the system default (e.g. en-GB) instead of fr-FR, which is second in priority.

Clients don't usually request languages unknown to the backend, but it is possible in theory, and languages can be added automatically by the client platform (iOS does this) without the client even knowing.
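The q-value ordering itself can be inspected with the standard Locale.LanguageRange API. This is not what HttpServletRequest uses internally; it is shown here only to illustrate how the example header sorts by priority:

```java
import java.util.List;
import java.util.Locale;

public class AcceptLanguageDemo {

    public static void main(String[] args) {
        //Parse the example header into ranges sorted by descending q-value
        List<Locale.LanguageRange> ranges =
                Locale.LanguageRange.parse("de-DE;q=1.0,fr-FR;q=0.9,en-GB;q=0.8");

        //Ranges are normalized to lower case by the API
        for (Locale.LanguageRange r : ranges) {
            System.out.println(r.getRange() + " " + r.getWeight());
        }
    }
}
```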

We need to iterate the locale chain and find the highest match that exists as a bundle.

Below is a sample in a JAX-RS environment.

public class Localization {

    @Context
    private HttpServletRequest request;

    private ResourceBundle i18n;

    @PostConstruct
    void postConstruct() {
        //List of locales from the Accept-Language header, ordered by priority
        List<Locale> locales = Collections.list(request.getLocales());

        if (locales.isEmpty()) {
            //Fall back to the default locale
            i18n = ResourceBundle.getBundle("bundles/translations");
            return;
        }

        for (Locale locale : locales) {
            try {
                i18n = ResourceBundle.getBundle("bundles/translations", locale);
                if (!languageEquals(i18n.getLocale(), locale)) {
                    //Default fallback detected: the returned bundle has a different
                    //language than the one requested, continue down the priority list.
                    //Only the language tag is checked, no support for detecting
                    //different regions in this sample.
                    continue;
                }
                break;
            } catch (MissingResourceException ignore) {
            }
        }
    }

    private boolean languageEquals(Locale first, Locale second) {
        return getISO2Language(first).equalsIgnoreCase(getISO2Language(second));
    }

    private String getISO2Language(Locale locale) {
        String[] localeStrings = locale.getLanguage().split("[-_]+");
        return localeStrings[0];
    }

    public ResourceBundle i18n() {
        return this.i18n;
    }
}