All posts by cen

Building Qt 5.15 on Windows with OpenSSL

I have written about the many problems of building Qt 5 with OpenSSL in the past. Several years later, it is time to upgrade to the latest Qt 5.15, which is presumably the last release in the Qt 5 series. This time I decided to drop Windows XP support, since it is just too much work to get working and the XP market share is much lower today than it was five years ago.

Since the Qt build documentation is still lacking, here is the latest account of the ordeal.

First things first: install Strawberry Perl and Python 2 (yeah.. really).

For speedier builds, also install jom. We will also need vcpkg to get the OpenSSL binaries. Needless to say, all of these tools need to be on your system PATH.

Now get the code:

git clone git://code.qt.io/qt/qt5.git
cd qt5
git checkout v5.15
perl init-repository

You can tell the init-repository script to skip modules you don't need (a full clone pulls 12GB of data!), though I couldn't be bothered to dig up the flags at the time; see the sketch below. You can also avoid the clone entirely by downloading the source archive instead.
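
For anyone who does want to trim the clone, init-repository accepts a --module-subset flag; something like this should skip the heaviest module (module list illustrative, check perl init-repository --help):

perl init-repository --module-subset=default,-qtwebengine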

Today we finally have C/C++ package managers available, so there is no need to bother building the dependencies anymore. Vcpkg and Conan are both great tools that do the job. So forget about building OpenSSL, just install the binaries with vcpkg:

./vcpkg.exe install openssl

In your user env, add

OPENSSL_LIBS=-llibssl -llibcrypto
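
If you prefer the command line over the environment variables dialog, setx can persist the same value (it only takes effect in newly opened shells):

setx OPENSSL_LIBS "-llibssl -llibcrypto"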

Now we can run the configure script inside the qt5 folder. First we do a release build linked against OpenSSL (run from a Visual Studio command prompt):

.\configure.bat -v -release -opensource -nomake examples -opengl desktop -platform win32-msvc2015 -openssl -openssl-linked -I C:/Users/me/git/vcpkg/installed/x86-windows/include -L C:/Users/me/git/vcpkg/installed/x86-windows/lib

Change the OpenSSL include and lib paths to wherever your vcpkg installation directory is. I am still targeting msvc2015 for now but plan to transition to 2019 eventually.

If you change the configure parameters between runs, make sure to delete the config.cache file, since in my experience it likes to carry over unwanted information from previous runs.

Build it with jom:

jom
jom install

Now you have a release build and you can add it in Qt Creator under Tools->Options->Qt Versions by giving it the path to C:\Qt\Qt-5.15.1\bin\qmake.exe.

If you also need a debug build of Qt, repeat the configure and build steps with the -release flag replaced by -debug (and remember to delete config.cache first).

And there you have it.. Qt built from source with OpenSSL support. Once you build your program you will also have to copy all the relevant .dll files into the .exe build directory (Qt5Core.dll, Qt5Gui.dll, Qt5Network.dll…). Which libraries you need to copy depends on what you actually use in your code. You will also need to copy libcrypto-1_1.dll and libssl-1_1.dll from the vcpkg install directory.

For debug builds you need to copy the debug libraries (Qt5Cored.dll and friends) instead.
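
Instead of copying everything by hand, the windeployqt tool from the freshly built bin directory can collect the required Qt DLLs for you; it does not take care of the OpenSSL DLLs, so those still have to be copied manually (path hypothetical):

windeployqt C:\path\to\myapp.exe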

The future appears even brighter now that bincrafters have packaged Qt as a Conan recipe, which means the next time I need to depend on Qt, I can run a single Conan command and get the proper build automagically delivered to my PC along with all the transitive dependencies. The future is now.

 

 


An OpenSprinkler success story

I wanted to automate the watering system at home, preferably using open-source and DIY systems. The initial plan was to go with a plain RPi, OpenHAB and some GPIO code driving the sprinkler valves, but the problem was creating a useful UI to control the system, since OpenHAB is too clunky and generic looking. I was also not quite ready to dive deep into embedded programming and the OpenHAB programming model. OpenSprinkler seemed to have everything I needed: an RPi hat with all the correct electrical outputs, open-source firmware and an Android app I could modify myself if needed. In the end, programming the sequences myself and trying to make a decent UI would have been just too much work for a small pet project, so I went with a ready-made solution.

The requirements

  1. Three separate zones around the house, max 7 sprinklers per zone.
  2. Each zone must be turned on separately due to the pressure requirement for the sprinklers to work.
  3. Pump that drives the water must be turned on automatically with each zone valve.

Setting up the OSPi

OpenSprinkler offers fully assembled systems but I decided to go the DIY route, using my own RPi and just buying the OSPi hat.

  • RPi 4 Model B 2GB
  • RPi official charger
  • OSPi (VAC)
  • 32GB SD card
  • Orbit 57056 2-Pin European Transformer

Finding a 24VAC power supply with an EU plug was quite a challenge; the listed Orbit model was one of the rare ones I could find online (on Amazon).

The OpenSprinkler documentation mentions that a separate power supply for the RPi is recommended. This was confirmed during testing, where I saw dmesg errors about insufficient voltage and the RPi rebooting endlessly. I ended up using the official RPi charger and the 24VAC supply at the same time.

Installing Raspbian and the OSPi firmware was easy, with no problems encountered. Assembling the OSPi was also unproblematic, other than drilling some holes into the supplied enclosure for the USB cable and WiFi adapter.

The WiFi

The built-in WiFi on the RPi would not work at even half of the required distance and was simply horrendous. The onboard WiFi can be disabled by editing /boot/config.txt and adding

dtoverlay=disable-wifi

to the [all] section.

After checking compatibility lists and reviews of RPi-compatible USB WiFi adapters, I went with the Edimax EW-7811UN. I disabled the integrated card and configured /boot/wpa_supplicant.conf to connect to the dedicated WiFi extender AP as a priority.

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
 ssid="Home_Ext"
 psk="pass"
 id_str="ext1"
 priority=5
}


network={
 ssid="Home"
 psk="pass"
 id_str="main"
 priority=10
}

In the end I managed to achieve a not-so-great but stable signal from the house to the controller box, at around -60 dBm. For the WiFi extender I went super cheap with a TP-LINK TL-WR840N (15EUR), positioning it so that no walls block the signal other than a single garage door. I also added a small script to root's cron to automatically reboot the RPi in case of any network downtime:

#!/bin/bash
#Ping the main router; reboot if it is unreachable
ping -c4 192.168.1.1 > /dev/null

if [ $? != 0 ]
then
  sudo /sbin/shutdown -r now
else
  echo "$(date) Internet is UP"
fi
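
Scheduled from root's crontab, e.g. every five minutes (script path hypothetical):

*/5 * * * * /home/pi/check-network.sh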

Driving the Pump

Looking for a relay to turn the pump on and off, I decided to go with an off-delay relay as an extra safety: it automatically turns off after a selected period of time. This is just in case the OSPi goes haywire and does not turn off as scheduled, or someone makes the mistake of turning the sprinklers on for too long. The pump draining all the water and running dry is a very bad scenario I would like to avoid. The model is a Tracon multifunction relay (AC/DC 12-240V), driven by the 24VAC OSPi output.

Putting it together

The relay is connected to OSPi port 0 (the master zone), which is always turned on together with valve 1, 2 or 3. The relay drives the first power socket, for the pump; the other two sockets are for the Orbit 24VAC supply and the RPi charger. This way the pump can be disconnected at any time and used manually.

The valves

24VAC valves are quite common. I found three candidates, from Orbit, Rainbird and Cleber. In the end it came down to price and availability, so I went with 3x Rainbird CP075 off eBay, roughly $30 each.

Finally, to connect the valves to the OSPi I got some 4×0.75mm cable and some waterproof clips for the valve side. These are automatic clips sitting in a box full of gel which seals them when closed.

Operation and conclusion

It turns out the OSPi firmware and app have exactly the functions I need to drive this setup. The master zone translates perfectly into the pump relay. For the valves, the "continuous" setting (which is the default) allows you to set up a single schedule program, and OSPi will automatically drive each valve one after another rather than all at once (which would not work due to low pressure). Without the continuous setting one would have to write a separate program for each valve, which is a bit clunky.

One thing that does not work quite as well is the automatic rain delay. The idea is: if sprinklers are scheduled to run today but rain is forecast, delay the program for some time, like a day. Unfortunately, if it does not rain at all, the delay is still applied. It would appear that OSPi only checks the forecast and does not adjust the delay according to the actual mm of rain that has fallen. I need to research this function in more depth to figure out the exact behavior and whether I can improve on it.

Another glitch, appearing once a month or so, is that the OSPi randomly becomes inaccessible. This is fixed with a reboot of the main router. I am not sure yet what exactly causes the problem, or whether the auto-reboot script works; more investigation is needed. It probably boils down to the not-so-great WiFi connection.

In the end I am quite happy I went with OpenSprinkler and not a full DIY solution. It saved time, does everything I require, and I am able to modify it if ever needed.

 

2023 update

After the system lay dormant through the winter of 2022/2023, the RPi would no longer boot. The SanDisk Ultra SD card seems to have gotten corrupted for some reason, so I had to re-image and reconfigure the OSPi again. I replaced the card with a SanDisk Industrial series card, which is supposedly more tolerant of heat and cold. We'll see how long this one lasts.

Other than that the system is still working great.


Debugging Laravel in Eclipse PDT

I don't use PHP enough to justify buying a PhpStorm license, so I am using Eclipse PDT instead. I am a bit rusty with Eclipse and PHP, and I couldn't really find anything on Google about debugging Laravel projects in Eclipse. I finally figured it out; here is how.

Examples are done on Eclipse IDE Version: 2019-12 (4.14.0).

First, configure XDebug with Eclipse. On Fedora you can install it via

sudo dnf install php-xdebug

Check that XDebug remote debugging is enabled on a phpinfo() test page; if not, add the following line to your php.ini:

xdebug.remote_enable = 1
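
If the debugger fails to connect later, also verify the port; Xdebug 2 defaults to 9000, which matches Eclipse's default XDebug port:

xdebug.remote_port = 9000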

Now in Eclipse, we first add a server. In Window->Preferences->PHP->Servers add a new server like this:

The document root is our Laravel public folder, and the base URL is the default host and port (http://127.0.0.1:8000) of

php artisan serve

Now check your debug settings in PHP->Debug: select the newly created server and make sure XDebug is set as the debugger:

If XDebug is not present here, configure it under PHP->Debug->Debuggers first.

Finally, under General->Web Browser, we select an external web browser to launch our website instead of the integrated Eclipse browser.

We are done with Preferences, so close the dialog. Next to the Debug button in the main Eclipse toolbar, click the arrow for the dropdown and select Debug Configurations…

Create a new PHP Web Application configuration like this.

We point the file to the public index (public/index.php) and map it to the root URL (the artisan serve default). Under the Debugger tab, check that XDebug is selected.

Now go to the terminal and serve your Laravel app as usual with

php artisan serve

Finally, run the "web" debug configuration from Eclipse. Eclipse should switch to the Debug perspective and open your site in the selected browser. You can now place breakpoints in controllers or wherever, and things just work as you would expect.


Apache HTTP to HTTPS redirect – use 307

Who knew that a simple thing like HTTP redirects could be so complicated? It turns out clients will just change POST to GET on a 301 (Postman, curl, everyone?); the same goes for 302, which in practice behaves like 303, also due to an old implementation "bug". Yeah, seriously.

If you have a REST API with POST (or other non-GET) endpoints (who doesn't?), this behaviour will completely destroy everything. Many guides out there (the top Google results) for configuring an Apache redirect do not mention this problem. The code of choice would be 308 Permanent Redirect, but that is fairly new, so I would not risk it; older clients don't know it exists. The only thing left is 307, which does not allow changing the method on redirect – exactly how it should be.
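
The method rewrite is easy to observe with curl: -d makes the request a POST, and after a 301/302 curl re-issues it as a GET unless told otherwise (endpoint hypothetical):

# the POST is silently re-sent as a GET after a 301/302
curl -iL -d 'x=1' http://example.com/api/things
# per-code opt-outs exist if you control the client
curl -iL -d 'x=1' --post301 --post302 http://example.com/api/things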

Solution:

<VirtualHost *:80>
    ServerName example.com
    Redirect 307 / https://example.com/
</VirtualHost>

 


Setting env variables with hyphens and running a program

Docker Compose is very unrestrictive in the naming of your environment variables: it allows hyphens and other "special" characters in variable names. When you need to set these variables in a regular shell, though, you are out of luck; bash and many other shells do not allow hyphens in variable names. But this is merely a shell restriction, so how do you work around it?
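
For example, a plain assignment fails in bash because the name is not a valid identifier:

$ BASE-URL=http://localhost:8080
bash: BASE-URL=http://localhost:8080: command not found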

With env

env -i 'TZ=Europe/Berlin' \
'PORT=8080' \
'BASE-URL=http://localhost:8080' \
'DB[0]_CONNECTION-URL=jdbc:postgresql://localhost:5432/postgres' \
'DB[0]_USERNAME=username' \
'DB[0]_PASSWORD=password' java -jar myapp.jar

Note that env -i drops all inherited env variables, so you might need to redefine the ones your program needs:

env -i JAVA_HOME=$JAVA_HOME \
'TZ=Europe/Berlin' \
'PORT=8080' java -jar myapp.jar
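
On the application side nothing special is needed, since System.getenv looks the name up as a plain string; a minimal check, assuming the variables from above:

public class EnvDemo {

    public static void main(String[] args) {
        //Hyphens and brackets are fine once the variable exists in the process environment
        System.out.println(System.getenv("BASE-URL"));
        System.out.println(System.getenv("DB[0]_CONNECTION-URL"));
    }
}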

 


Obscure IntelliJ IDEA "bug" with maven jdk profile activation "not working"

Since Java 9 it has been popular to activate, through a Maven profile, additional dependencies which were removed from the core JDK.

<profiles>
    <profile>
        <id>java9-modules</id>
        <activation>
            <jdk>[9,)</jdk>
        </activation>
        <dependencies>
            <dependency>
                <groupId>javax.xml.bind</groupId>
                <artifactId>jaxb-api</artifactId>
                <version>2.3.1</version>
            </dependency>
        </dependencies>
    </profile>
</profiles>

Using Java 11, jaxb-api correctly shows up in the Maven dependency tree, and the Docker-packaged application works correctly with the dependency jar on the classpath.

However, when running the app from IntelliJ, it falls apart with

Exception in thread "main" java.lang.NoClassDefFoundError: javax/xml/bind/annotation/XmlRootElement

Opening the module dependencies in the IDE shows that jaxb-api is not on the list of dependencies. IntelliJ is therefore not activating the Maven profile correctly, even though:

  • maven compiler release is set to 11
  • project and Module SDK is set to Java 11
  • app is run with Java 11

Why is that? There is this snippet in the IntelliJ Maven profiles documentation:

If you use a profile activation with the JDK condition (JDK tags in the POM: <jdk></jdk>), IntelliJ IDEA will use the JDK version of the Maven importer instead of the project's JDK version when syncing the project and resolving dependencies. Also, if you use https certificates, you need to include them manually for the Maven importer as well as for the Maven runner.

Why the IntelliJ developers decided to tie Maven profile activation to the importer, I do not know. It would make much more sense to tie it to the Project/Module SDK: if an app is being developed with a Java 11 target, one would expect that profile to be active at build and run time, not at import time. The practical workaround is to point the importer at the same JDK, via Settings->Build, Execution, Deployment->Build Tools->Maven->Importing->JDK for importer.

With more digging around I managed to find an issue complaining about this problem. Unfortunately, the issue is four years old now with no apparent activity. Preferably the default should be changed; if not, at least give us an option to choose the source of profile activation in the preferences.

 

 


Receive only the data your client needs – full dynamic JSON filtering with Jackson

A lot of the time, the JSON returned by your REST API grows into incredibly big structures and data sizes due to business logic complexity added over time. Then there are API methods returning lists of objects, which can be huge. If you serve multiple clients, each one can have different demands on what is and is not needed from that data, so the backend can't decide on its own what to prune and what to keep. Ideally, the backend would always return the full JSON by default but allow clients to specify exactly what they want and adjust the response accordingly. We can achieve this using the power of the Jackson library.

Goal:
– allow REST API clients to decide on their own which parts of JSON to receive (full JSON filtering)

Resources for this tutorial:
– Microprofile or JakartaEE platform (JAX-RS)
– Jackson library
– Java classes (lib) representing your API responses which are serialized to JSON
– some custom code to bring things together

The lib module

First let's define a few classes which represent our JSON responses.

public class Car {

  private Engine engine;

  private List<Wheel> wheels;

  private String brand;

 //Getters and setters..
}

public class Wheel {

  private BigDecimal pressure;

  //Getters and setters..
}

public class Engine {
  
  private int numOfCylinders;

  private int hp;

  //Getters and setters..
}

Our lib serialized to JSON would look something like this:

{
    "engine": {
        "numOfCylinders": 4,
        "hp": 180
    },
    "wheels": [
        {
            "pressure": 30.2
        },
        {
            "pressure": 30.1
        },
        {
            "pressure": 30.0
        },
        {
            "pressure": 30.3
        }
    ],
    "brand": "Jugular"
}

Let's say one of our clients only needs the engine horse power and brand information. We want to be able to specify a query parameter like filter=car:engine,brand;engine:hp and receive the following:

{
    "engine": {
        "hp": 180
    },
    "brand": "Jugular"
}

Step in Jackson

Jackson provides an annotation for such tasks called @JsonFilter. This annotation expects a filter name as a parameter, and a filter with that name must be applied to the serialization mapper, for example:

FilterProvider filters = new SimpleFilterProvider()
    .addFilter("carFilter", SimpleBeanPropertyFilter.filterOutAllExcept("wheels"));
String jsonString = mapper.writer(filters).writeValueAsString(car);

As you can see, all we need is already there, but it is a rather static affair. We need to take this and make it fully dynamic and client driven.

The reason a filter needs a name is that each one is bound to a class, and attribute filtering is done on that class. What we need to do is transform car:engine,brand into a carFilter with SimpleBeanPropertyFilter.filterOutAllExcept("engine", "brand").

For starters, let's add the filters to our classes:

@JsonFilter("carFilter")
public class Car {}

@JsonFilter("engineFilter")
public class Engine {}

@JsonFilter("wheelFilter")
public class Wheel {}

There is one thing about this that bothers me: the filter name is a static String, so it is refactoring-unfriendly if the class name changes some day. Couldn't we just derive the filter name from the name of the underlying class? Yes we can, by extending Jackson's annotation introspection:

public class MyJacksonAnnotationIntrospector extends JacksonAnnotationIntrospector {

    @Override
    public Object findFilterId(Annotated a) {
        JsonFilter ann = _findAnnotation(a, JsonFilter.class);
        if (ann != null) {
            String id = ann.value();
            if (id.length() > 0) {
                return id;
            }
            else {
                try {
                    //Use className+Filter as filter ID if ID is not set, e.g. Car -> carFilter
                    //StringUtils comes from Apache Commons Lang
                    Class<?> clazz = Class.forName(a.getName());
                    return StringUtils.uncapitalize(clazz.getSimpleName())+"Filter";
                } catch (ClassNotFoundException e) {
                    e.printStackTrace();
                }
            }
        }
        return null;
    }
}

With this, any class annotated with @JsonFilter("") automatically gets a filter called classNameFilter. We no longer need to specify filter names and keep them in sync with class names.

Our lib now looks like:

@JsonFilter("")
public class Car {}

@JsonFilter("")
public class Engine {}

@JsonFilter("")
public class Wheel {}

The next step is to transform the query parameter into our filter structure and apply it.

First, register a Jackson provider for the JAX-RS server:

@Provider
public class JacksonProvider extends JacksonJsonProvider implements ContextResolver<ObjectMapper> {

    private final ObjectMapper mapper;

    public JacksonProvider() {
        mapper = new ObjectMapper();
        mapper.registerModule(new JavaTimeModule());
        mapper.setFilterProvider(new SimpleFilterProvider().setFailOnUnknownId(false));
        mapper.setAnnotationIntrospector(new MyJacksonAnnotationIntrospector());
    }

    @Override
    public ObjectMapper getContext(Class<?> type) {
        return mapper;
    }
}

We register our own introspector and disable failures on unknown filter IDs (in case a client filters by something nonexistent).

The provider must be registered in your REST Application:

@ApplicationPath("")
public class MyApplication extends Application {

    @Override
    public Set<Class<?>> getClasses() {

        Set<Class<?>> classes = new HashSet<>();

        classes.add(JacksonProvider.class);

        return classes;
    }
}

Next, we implement our own MessageBodyWriter to override the default serialization and apply the filters dynamically:

@Provider
@Produces(MediaType.APPLICATION_JSON)
public class JsonFilterProvider implements MessageBodyWriter<Object> {

    @Context
    private UriInfo uriInfo;

    @Context
    private JacksonProvider jsonProvider;

    public static final String PARAM_NAME = "filter";

    public boolean isWriteable(Class<?> aClass, Type type, Annotation[] annotations, MediaType mediaType) {
        return MediaType.APPLICATION_JSON_TYPE.equals(mediaType);
    }

    public long getSize(Object object, Class<?> aClass, Type type, Annotation[] annotations,
                        MediaType mediaType) {
        return -1;
    }

    public void writeTo(Object object, Class<?> aClass, Type type, Annotation[] annotations,
                        MediaType mediaType, MultivaluedMap<String, Object> stringObjectMultivaluedMap,
                        OutputStream outputStream) throws IOException, WebApplicationException {

        String queryParamValue = uriInfo.getQueryParameters().getFirst(PARAM_NAME);
        if (queryParamValue!=null && !queryParamValue.equals("")) {

            SimpleFilterProvider sfp = new SimpleFilterProvider().setFailOnUnknownId(false);

            //We link @JsonFilter annotation with dynamic property filter
            for (Map.Entry<String, Set<String>> entry : getFilterLogic(queryParamValue).entrySet()) {
                sfp.addFilter(entry.getKey() + "Filter", SimpleBeanPropertyFilter.filterOutAllExcept(entry.getValue()));
            }

            jsonProvider.locateMapper(aClass, mediaType).writer(sfp).writeValue(outputStream, object);
        }
        else {
            jsonProvider.locateMapper(aClass, mediaType).writeValue(outputStream, object);
        }
    }

    //Map of object names and set of fields
    private Map<String, Set<String>> getFilterLogic(String paramValue) {
        // ?filter=car:engine,brand;engine:numOfCylinders
        String[] filters = paramValue.split(";");

        Map<String, Set<String>> filterAndFields = new HashMap<>();

        for (String filterInstance : filters) {
            //car:engine,brand
            List<String> pair = Arrays.asList(filterInstance.split(":"));
            if (pair.size()!=2) {
                throw new RuntimeException();
            }

            Set<String> fields = new HashSet<>(Arrays.asList(pair.get(1).split(",")));
            filterAndFields.put(pair.get(0), fields);
        }

        return filterAndFields;
    }
}

The getFilterLogic method assembles the query parameter into a map of <String className, Set<String> fields>, which is then applied as a Jackson filter.

Finally, we need to register our JsonFilterProvider in our Application as we did with JacksonProvider.

@ApplicationPath("")
public class MyApplication extends Application {

    @Override
    public Set<Class<?>> getClasses() {

        Set<Class<?>> classes = new HashSet<>();

        classes.add(JacksonProvider.class);
        classes.add(JsonFilterProvider.class);

        return classes;
    }
}
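
With everything registered, a request against a hypothetical endpoint can then pick its fields (the quotes keep the shell from eating the semicolon):

curl 'http://localhost:8080/api/cars/1?filter=car:engine,brand;engine:hp'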

One small deficiency of this solution is that once you specify a class with fields to filter, it will be filtered wherever it appears in the nested JSON structure; you can't filter a specific class only at a specific level. Realistically, I think this is a rather minor problem compared to the benefits and the simplicity of the implementation.

Finally, a question of documentation: how do you tell the client developer about all the possible filter object names and their attributes? If you use OpenAPI, you are 95% there. Simply document that one can filter by model name followed by attribute names; the client developer can easily figure these out from your OpenAPI specification. The only remaining problem is when you don't want to allow filtering on all classes. In that case my approach would be to mark a filterable class in its OpenAPI description:

@ApiModel(description = "[Filterable] A car.")

This manual approach to documentation goes against the rest of the paradigm, so a real purist would write an OpenAPI extension that introspects all @JsonFilter annotations and modifies the descriptions automatically. But let's leave that for a future blog post.

 

A similar, more advanced and out-of-the-box solution is Squiggly, which also uses Jackson under the hood.

 


Updating server from Debian Stretch to Buster

Not the most pleasant experience.. I expected a smoother upgrade from the Debian team. Upgrading from 8 to 9 was a walk in the park compared to this.

1. MySQL silently fails to start after upgrade

MySQL was left behind at version 5.5 after the upgrade and would just not start anymore, probably segfaulting. There is no mysql-server package anymore, so I really had no option but to remove it and install MariaDB. In addition, I had trouble running MariaDB due to the requirement to run mysql_upgrade.. which I couldn't run because I had no working instance of the MySQL server! Installing the package default-mysql-server instead somehow solved the problem.

2. phpMyAdmin removed from packages

I am not sure how maintaining phpMyAdmin is such a big task that the package was dropped from the repos. A manual setup is simply unzipping the code and adding an Apache config, as sketched below.
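
Something along these lines should do, assuming the code is unpacked to /usr/share/phpmyadmin (path hypothetical):

Alias /phpmyadmin /usr/share/phpmyadmin

<Directory /usr/share/phpmyadmin>
    Require all granted
</Directory>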

3. docker fails due to nftables switch

Docker is such a big and important package these days… and it breaks because iptables is no longer the default backend. I would expect the upgrade process not to make the switch in this case.
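
The usual fix is to switch the alternatives back to the legacy backend and restart Docker:

sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo systemctl restart docker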

4. apt autoremove anomaly

For some reason, running autoremove wanted to purge essential packages such as php, gcc and python3. I did not pay too much attention, but alarms started going off when ifup was getting removed and my ssh connection was lost. ?????????? (10 question marks)

 

Luckily, that was all of the trouble; dealing with broken WordPress plugins was a relaxing task afterwards.


HTTP Accept-Language request header to ResourceBundle

The HTTP Accept-Language header is sent by the client to inform the backend of the preferred language for the response. In Java, the go-to utility for handling localization is ResourceBundle.

What is missing is a standard way to properly convert the input header to the correct ResourceBundle. Specifically,

ResourceBundle i18n = ResourceBundle.getBundle("bundles/translations", request.getLocale());

is insufficient. The HttpServletRequest::getLocale() method returns the top preferred locale, but if no ResourceBundle exists for it, the lookup falls back to the default locale instead of going down the priority list. For example, with this header:

Accept-Language: de-DE;q=1.0,fr-FR;q=0.9,en-GB;q=0.8

a backend missing de-DE translations will return the system default (e.g. en-GB) instead of fr-FR, which is second by priority.

Clients don't usually request languages unknown to the backend, but it is possible in theory, and languages can be added automatically by the client platform (iOS does this) without the client even knowing.

We need to iterate the locale chain and find the highest-priority match that exists as a bundle.

Below is a sample in a JAX-RS environment.

@RequestScoped
public class Localization {

    @Context
    private HttpServletRequest request;

    private ResourceBundle i18n;

    @PostConstruct
    void postConstruct() {
        //List of locales from Accept-Language header
        List<Locale> locales = Collections.list(request.getLocales());

        if (locales.isEmpty()) {
            //Fall back to default locale
            locales.add(request.getLocale());
        }

        for (Locale locale : locales) {
            try {
                i18n = ResourceBundle.getBundle("bundles/translations", locale);
                if (!languageEquals(i18n.getLocale(), locale)) {
                    //Default fallback detected
                    //The resource bundle that was returned has different language than the one requested, continue
                    //Only language tag is checked, no support for detecting different regions in this sample
                    continue;
                }
                break;
            }
            catch (MissingResourceException ignore) {
            }
        }
    }

    private boolean languageEquals(Locale first, Locale second) {
        return getISO2Language(first).equalsIgnoreCase(getISO2Language(second));
    }

    //Strip region/script and keep only the language part
    private String getISO2Language(Locale locale) {
        String[] localeStrings = locale.getLanguage().split("[-_]+");
        return localeStrings[0];
    }

    public ResourceBundle i18n() {
        return this.i18n;
    }
}
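
A resource can then inject the bean and read translations from the matched bundle; a minimal sketch, assuming a "greeting" key exists in the bundle:

@Path("hello")
public class HelloResource {

    @Inject
    private Localization localization;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        //Resolved against the best-matching bundle for this request
        return localization.i18n().getString("greeting");
    }
}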

 


Resize Fedora root partition

The default root partition size on my Fedora installs usually becomes too small down the line, to the point where I can no longer install packages or perform upgrades without removing packages or clearing the dnf cache.

I therefore wanted to shrink my home partition and add that space to root.

We can't perform the resize while the partitions are mounted, so we need to boot into emergency or rescue mode. I first tried emergency mode, but the boot would lock up at the Fedora logo, so I went with rescue mode instead.

Once in the grub menu, press e to edit. At the end of the linux16 or linuxefi line, add

systemd.unit=rescue.target

Press Ctrl+X to boot with the modified parameters. Once in rescue mode, perform the resize:

#Shrink home first, then grow root with the freed space
lvresize -L -10G --resizefs /dev/fedora/home
lvresize -L +10G --resizefs /dev/fedora/root
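
After rebooting normally, the new sizes can be verified with (assuming the default fedora volume group and mount points):

lvs
df -h / /home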

