All posts by cen

Apache HTTP to HTTPS redirect – use 307

Who knew that a simple thing like HTTP redirects could be so complicated? It turns out clients will just change POST to GET on a 301 (Postman, curl, everyone?), and the same goes for 302, which in practice behaves like 303; that too is an old implementation "bug". Yeah, seriously.

If you have a REST API with POST (or other non-GET) endpoints (who doesn't?), this behaviour will completely break them. Many guides for configuring an Apache redirect (including the top Google results) do not mention this problem. The status code of choice would be 308 Permanent Redirect, but that is fairly new and older clients don't know it exists, so I would not risk it. That leaves 307 Temporary Redirect, which does not allow changing the method on redirect: exactly how it should be.

Solution:

<VirtualHost *:80>
    ServerName example.com
    Redirect 307 / https://example.com/
</VirtualHost>
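
A quick way to sanity check the behaviour with curl (example.com standing in for your own host): -d makes the request a POST and -L follows the redirect.

# With Redirect 307, curl re-sends the POST (and its body) to the https URL
curl -iL -d 'key=value' http://example.com/api/things
# With Redirect 301 it would silently switch to GET unless you pass --post301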


Setting env variables with hyphens and running a program

Docker Compose is very permissive when it comes to naming your environment variables: hyphens and other "special" characters are allowed in variable names. But when you need to set these variables in a regular shell you are out of luck; bash and many other shells do not allow hyphens in variable names. This is merely a shell restriction though, so how do we get around it?

With env

env -i 'TZ=Europe/Berlin' \
'PORT=8080' \
'BASE-URL=http://localhost:8080' \
'DB[0]_CONNECTION-URL=jdbc:postgresql://localhost:5432/postgres' \
'DB[0]_USERNAME=username' \
'DB[0]_PASSWORD=password' java -jar myapp.jar

Note that env -i clears all inherited env variables, so you might need to redefine the ones you still need:

env -i JAVA_HOME=$JAVA_HOME \
'TZ=Europe/Berlin' \
'PORT=8080' java -jar myapp.jar
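
On the application side nothing special is needed; the JVM sees the full environment regardless of shell naming rules. A minimal sketch (variable names taken from the example above):

public class EnvDemo {
    public static void main(String[] args) {
        //System.getenv accepts any name the OS allows, including hyphens and brackets
        System.out.println("base url: " + System.getenv("BASE-URL"));
        System.out.println("db url: " + System.getenv("DB[0]_CONNECTION-URL"));
    }
}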


Obscure IntelliJ IDEA "bug" with maven jdk profile activation "not working"

Since Java 9 it has been popular to pull in dependencies that were removed from the core JDK through a Maven profile.

<profiles>
    <profile>
        <id>java9-modules</id>
        <activation>
            <jdk>[9,)</jdk>
        </activation>
        <dependencies>
            <dependency>
                <groupId>javax.xml.bind</groupId>
                <artifactId>jaxb-api</artifactId>
                <version>2.3.1</version>
            </dependency>
        </dependencies>
    </profile>
</profiles>

Using Java 11, jaxb-api correctly shows up in the Maven dependency tree, and the Docker-packaged application works correctly with the dependency jar on the classpath.

However, when running the app from IntelliJ, it falls apart with

Exception in thread "main" java.lang.NoClassDefFoundError: javax/xml/bind/annotation/XmlRootElement

Opening the module dependencies in the IDE shows that jaxb-api is not on the list. IntelliJ is therefore not activating the Maven profile, even though:

  • maven compiler release is set to 11
  • project and Module SDK is set to Java 11
  • app is run with Java 11

Why is that? There is this snippet in the IntelliJ documentation on Maven profiles:

If you use a profile activation with the JDK condition (JDK tags in the POM: <jdk></jdk>), IntelliJ IDEA will use the JDK version of the Maven importer instead of the project's JDK version when syncing the project and resolving dependencies. Also, if you use https certificates, you need to include them manually for the Maven importer as well as for the Maven runner.

Why the IntelliJ developers decided to tie Maven profile activation to the importer, I do not know. It would make much more sense to tie it to the Project/Module SDK. If the app is being developed with a Java 11 target, one would expect the profile to be active at build and run time, not at import time.

With more digging around I managed to find an issue complaining about this problem. Unfortunately the issue is 4 years old now with no apparent activity. Preferably the default should be changed; if not, at least give us an option to choose the source of profile activation in the preferences.


Receive only the data your client needs – full dynamic JSON filtering with Jackson

JSON returned by a REST API often grows into incredibly big structures and data sizes due to business logic complexity that is added over time. Then there are API methods returning lists of objects, which can be huge. If you serve multiple clients, each one can have different demands on what it needs from that data, so the backend can't decide on its own what to prune and what to keep. Ideally, the backend would always return the full JSON by default but allow clients to specify exactly what they want and adjust the response accordingly. We can achieve this using the power of the Jackson library.

Goal:
– allow REST API clients to decide on their own which parts of JSON to receive (full JSON filtering)

Resources for this tutorial:
– Microprofile or JakartaEE platform (JAX-RS)
– Jackson library
– Java classes (lib) representing your API responses which are serialized to JSON
– some custom code to bring things together

The lib module

First, let's define a few classes that represent our JSON responses.

public class Car {

  private Engine engine;

  private List<Wheel> wheels;

  private String brand;

 //Getters and setters..
}

public class Wheel {

  private BigDecimal pressure;

  //Getters and setters..
}

public class Engine {
  
  private int numOfCylinders;

  private int hp;

  //Getters and setters..
}

Our lib serialized to JSON would look something like this:

{
    "engine": {
        "numOfCylinders": 4,
        "hp": 180
    },
    "wheels": [
        {
            "pressure": 30.2
        },
        {
            "pressure": 30.1
        },
        {
            "pressure": 30.0
        },
        {
            "pressure": 30.3
        }
    ],
    "brand": "Jugular"
}

Let's say one of our clients only needs the engine horsepower and brand information. We want to be able to specify a query parameter like filter=car:engine,brand;engine:hp and receive the following:

{
    "engine": {
        "hp": 180
    },
    "brand": "Jugular"
}

Enter Jackson

Jackson provides an annotation for such tasks called @JsonFilter. The annotation expects a filter name as a parameter, and a filter registered under that name must be applied to the serialization mapper, for example:

FilterProvider filters = new SimpleFilterProvider()
    .addFilter("carFilter", SimpleBeanPropertyFilter.filterOutAllExcept("wheels"));
String jsonString = mapper.writer(filters)...

As you can see, everything we need is already there, but it is a rather static affair. We need to take this and make it fully dynamic and client driven.

The reason a filter needs a name is that each one is bound to a class, and attribute filtering is done on that class. What we need to do is transform car:engine,brand into a carFilter with SimpleBeanPropertyFilter.filterOutAllExcept("engine", "brand").

For starters, let's add the filters to our classes:

@JsonFilter("carFilter")
public class Car {}

@JsonFilter("engineFilter")
public class Engine {}

@JsonFilter("wheelFilter")
public class Wheel {}

There is one thing about this that bothers me: the filter name is a static String, so it is refactoring-unfriendly if the class name changes some day. Couldn't we just derive the filter name from the name of the underlying class? Yes we can, by extending Jackson's annotation introspection:

public class MyJacksonAnnotationIntrospector extends JacksonAnnotationIntrospector {

    @Override
    public Object findFilterId(Annotated a) {
        JsonFilter ann = _findAnnotation(a, JsonFilter.class);
        if (ann != null) {
            String id = ann.value();
            if (id.length() > 0) {
                return id;
            }
            else {
                try {
                    //Use className+Filter as filter ID if ID is not set, e.g. Car -> carFilter
                    Class<?> clazz = Class.forName(a.getName());
                    return StringUtils.uncapitalize(clazz.getSimpleName())+"Filter";
                } catch (ClassNotFoundException e) {
                    e.printStackTrace();
                }
            }
        }
        return null;
    }
}

With this, any class annotated with @JsonFilter("") automatically gets a filter called classNameFilter. We no longer need to specify filter names and keep them in sync with class names.
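
A quick standalone sketch of the introspector in action (a Car instance named car as defined above; imports omitted as in the other snippets):

ObjectMapper mapper = new ObjectMapper();
mapper.setAnnotationIntrospector(new MyJacksonAnnotationIntrospector());

//"carFilter" is now derived from the class name by the introspector
FilterProvider filters = new SimpleFilterProvider()
        .addFilter("carFilter", SimpleBeanPropertyFilter.filterOutAllExcept("brand"))
        .setFailOnUnknownId(false);

String json = mapper.writer(filters).writeValueAsString(car);
//json is {"brand":"Jugular"}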

Our lib now looks like:

@JsonFilter("")
public class Car {}

@JsonFilter("")
public class Engine {}

@JsonFilter("")
public class Wheel {}

The next step is to transform the query parameter into our filter structure and apply it.

First, register a Jackson provider for the JAX-RS server:

@Provider
public class JacksonProvider extends JacksonJsonProvider implements ContextResolver<ObjectMapper> {

    private final ObjectMapper mapper;

    public JacksonProvider() {
        mapper = new ObjectMapper();
        mapper.registerModule(new JavaTimeModule());
        mapper.setFilterProvider(new SimpleFilterProvider().setFailOnUnknownId(false));
        mapper.setAnnotationIntrospector(new MyJacksonAnnotationIntrospector());
    }

    @Override
    public ObjectMapper getContext(Class<?> type) {
        return mapper;
    }
}

We register our own introspector and disable failures on unknown filter IDs (in case the client filters by something nonexistent).

The provider must be registered in your REST Application.

@ApplicationPath("")
public class MyApplication extends Application {

    @Override
    public Set<Class<?>> getClasses() {

        Set<Class<?>> classes = new HashSet<>();

        classes.add(JacksonProvider.class);

        return classes;
    }
}

Finally, we implement our own MessageBodyWriter to override the default serialization and apply the filters dynamically.

@Provider
@Produces(MediaType.APPLICATION_JSON)
public class JsonFilterProvider implements MessageBodyWriter<Object> {

    @Context
    private UriInfo uriInfo;

    @Context
    private JacksonProvider jsonProvider;

    public static final String PARAM_NAME = "filter";

    public boolean isWriteable(Class<?> aClass, Type type, Annotation[] annotations, MediaType mediaType) {
        return MediaType.APPLICATION_JSON_TYPE.equals(mediaType);
    }

    public long getSize(Object object, Class<?> aClass, Type type, Annotation[] annotations,
                        MediaType mediaType) {
        return -1;
    }

    public void writeTo(Object object, Class<?> aClass, Type type, Annotation[] annotations,
                        MediaType mediaType, MultivaluedMap<String, Object> stringObjectMultivaluedMap,
                        OutputStream outputStream) throws IOException, WebApplicationException {

        String queryParamValue = uriInfo.getQueryParameters().getFirst(PARAM_NAME);
        if (queryParamValue!=null && !queryParamValue.equals("")) {

            SimpleFilterProvider sfp = new SimpleFilterProvider().setFailOnUnknownId(false);

            //We link @JsonFilter annotation with dynamic property filter
            for (Map.Entry<String, Set<String>> entry : getFilterLogic(queryParamValue).entrySet()) {
                sfp.addFilter(entry.getKey() + "Filter", SimpleBeanPropertyFilter.filterOutAllExcept(entry.getValue()));
            }

            jsonProvider.locateMapper(aClass, mediaType).writer(sfp).writeValue(outputStream, object);
        }
        else {
            jsonProvider.locateMapper(aClass, mediaType).writeValue(outputStream, object);
        }
    }

    //Map of object names and set of fields
    private Map<String, Set<String>> getFilterLogic(String paramValue) {
        // ?filter=car:engine,brand;engine:numOfCylinders
        String[] filters = paramValue.split(";");

        Map<String, Set<String>> filterAndFields = new HashMap<>();

        for (String filterInstance : filters) {
            //car:engine,brand
            List<String> pair = Arrays.asList(filterInstance.split(":"));
            if (pair.size()!=2) {
                //Malformed filter, reject the request
                throw new BadRequestException("Malformed filter: " + filterInstance);
            }

            Set<String> fields = new HashSet<>(Arrays.asList(pair.get(1).split(",")));
            filterAndFields.put(pair.get(0), fields);
        }

        return filterAndFields;
    }
}

The getFilterLogic method assembles the query parameter into a map of <String className, Set<String> fields>, which is then applied as Jackson filters.

Finally, we need to register our JsonFilterProvider in our Application as we did with JacksonProvider.

@ApplicationPath("")
public class MyApplication extends Application {

    @Override
    public Set<Class<?>> getClasses() {

        Set<Class<?>> classes = new HashSet<>();

        classes.add(JacksonProvider.class);
        classes.add(JsonFilterProvider.class);

        return classes;
    }
}
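
With everything registered, the filtering is driven entirely by the client. A hypothetical request against a /cars resource (host and path assumed for illustration):

curl 'https://api.example.com/cars/1?filter=car:engine,brand;engine:hp'

This returns only the engine horsepower and brand, exactly the trimmed JSON shown at the beginning.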

One small deficiency of this solution is that once you specify a class with fields to filter, it will be filtered wherever it appears in the nested JSON structure; you can't filter a specific class only at a specific level. Realistically, I think this is a rather minor problem compared to the benefits and the simplicity of the implementation.

Finally, a question of documentation. How do you tell the client developer about all the possible filter object names and their attributes? If you use OpenAPI, you are 95% there: simply document that filtering works by model name followed by attribute names, and the client developer can easily figure out the names from your OpenAPI specification. The only remaining problem is when you don't want to allow filtering on all classes. In that case my approach would be to mark a filterable class in its OpenAPI description:

@ApiModel(description = "[Filterable] A car.")

This manual approach to documentation goes against the rest of the paradigm, so a real purist would write an OpenAPI extension that introspects all @JsonFilter annotations and modifies the descriptions automatically. But let's leave that for a future blog post.

A similar, more advanced and out-of-the-box solution is squiggly, which also uses Jackson under the hood.


Updating server from Debian Stretch to Buster

Not the most pleasant experience… I expected a smoother upgrade from the Debian team. Upgrading from 8 to 9 was a walk in the park compared to this.

1. MySQL silently fails to start after upgrade

MySQL was left behind at version 5.5 after the upgrade and would just not start anymore, probably segfaulting. There is no mysql-server package anymore, so I really had no option but to remove it and install MariaDB. In addition, I had trouble running MariaDB due to the requirement to run mysql_upgrade… but I couldn't run that because I had no working MySQL server instance! Installing the package default-mysql-server instead somehow solved the problem.

2. phpMyAdmin removed from packages

Not sure how maintaining phpMyAdmin is such a big task that the package was dropped from the repos. The regular setup is simply unzipping the code and adding an Apache config.

3. docker fails due to nftables switch

Docker is such a big and important package these days… and it breaks because iptables is no longer the default. I would expect the upgrade process not to make the nftables switch in this case.
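
The commonly cited fix, assuming the legacy alternatives are installed, is to switch back to the legacy backend and restart Docker:

update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
systemctl restart docker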

4. apt autoremove anomaly

For some reason running autoremove wanted to purge essential packages such as php, gcc and python3. I did not pay too much attention, but alarms started going off when ifup was getting removed and my ssh connection was lost. ?????????? (10 question marks)

Luckily this was all of the trouble; dealing with broken WordPress plugins was a relaxing task afterwards.


HTTP Accept-Language request header to ResourceBundle

The HTTP Accept-Language header is sent by the client to tell the backend the preferred language for the response. In Java, the go-to utility for handling localization is ResourceBundle.

What is missing is a standard way to properly convert the incoming header into the correct ResourceBundle. Specifically,

ResourceBundle i18n = ResourceBundle.getBundle("bundles/translations", request.getLocale());

is insufficient. The HttpServletRequest::getLocale() method returns the top preferred locale, but if no matching ResourceBundle exists, getBundle falls back to the default locale instead of going down the priority list. For example, this header:

Accept-Language: de-DE;q=1.0,fr-FR;q=0.9,en-GB;q=0.8

when the backend is missing de-DE translations, will return the system default (e.g. en-GB) instead of fr-FR, which is second by priority.

Clients don't usually request languages unknown to the backend, but it is possible in theory, and languages can be added automatically by the client platform (iOS does this) without the client knowing.

We need to iterate the locale chain and find the highest match that exists as a bundle.

Below is a sample in a JAX-RS environment.

@RequestScoped
public class Localization {

    @Context
    private HttpServletRequest request;

    private ResourceBundle i18n;

    @PostConstruct
    void postConstruct() {
        //List of locales from Accept-Language header
        List<Locale> locales = Collections.list(request.getLocales());

        if (locales.isEmpty()) {
            //Fall back to default locale
            locales.add(request.getLocale());
        }

        for (Locale locale : locales) {
            try {
                i18n = ResourceBundle.getBundle("bundles/translations", locale);
                if (!languageEquals(i18n.getLocale(), locale)) {
                    //Default fallback detected
                    //The resource bundle that was returned has different language than the one requested, continue
                    //Only language tag is checked, no support for detecting different regions in this sample
                    continue;
                }
                break;
            }
            catch (MissingResourceException ignore) {
            }
        }
    }

    private boolean languageEquals(Locale first, Locale second) {
        return getISO2Language(first).equalsIgnoreCase(getISO2Language(second));
    }

    private String getISO2Language(Locale locale) {
        String[] localeStrings = (locale.getLanguage().split("[-_]+"));
        return localeStrings[0];
    }

    public ResourceBundle i18n() {
        return this.i18n;
    }
}
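
Usage then boils down to injecting the bean (the bundle key here is hypothetical):

@Inject
private Localization localization;

public String greeting() {
    //Resolves against the best-matching bundle for the current request
    return localization.i18n().getString("greeting");
}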


Resize Fedora root partition

The default root partition size on my Fedora installs usually becomes too small down the line, to the point where I can no longer install packages or perform upgrades without removing packages or clearing the dnf cache.

Therefore I wanted to shrink my home partition and add that space to root.

We can't perform the resize while the partitions are mounted, so we need to boot into emergency or rescue mode. I first tried emergency mode but the boot would lock up at the Fedora logo, so I went with rescue mode instead.

Once in the GRUB menu, press e to edit. At the end of the linux16 or linuxefi line, add

systemd.unit=rescue.target

Press Ctrl+X to boot with the modified parameters. Once in rescue mode, perform the resize:

lvresize -L -10G --resizefs /dev/fedora/home
lvresize -L +10G --resizefs /dev/fedora/root
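
Shrink home first so the freed extents are available before growing root. Afterwards a quick sanity check (sizes will differ on your system):

lvs                 # LV sizes should reflect the -10G/+10G change
df -h / /home       # filesystem sizes should match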



That moment when you need to look up definition of C++ for loop

I was getting a segfault in an old piece of code that I maintain. The culprit was pinpointed to this:

bool found = false;
vector<string>::iterator i;
for (i = v.begin(); !found && i != v.end(); ++i) {
    if (name == *i) {
        found = true;
    }
}
if (found) {
   v.erase( i ); // <-- segfault here
}

I went through this piece of code at least 10 times without noticing the problem. The snippet is simple enough: when a match is found, set found to true, which breaks the loop since the loop condition now evaluates to false. The iterator remains at the position of the matched element.

WRONG.

What we are actually getting is iterator+1.

What we don't see directly from the code is that the increment happens before the condition is evaluated for the next iteration, giving us iterator+1. For any other position that silently erases the wrong element, and when the match is the last element it calls erase on v.end(), hence the segfault.
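
One way to fix it is to drop the manual flag and let std::find position the iterator; a minimal sketch:

#include <algorithm>

//find returns an iterator to the match, or v.end() if there is none
vector<string>::iterator it = find(v.begin(), v.end(), name);
if (it != v.end()) {
    v.erase(it); //iterator points exactly at the match, no off-by-one
}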


Tenant resource authorization in JAX-RS

You have a book REST resource and each book has an owner. Only the owner of the book can access an owned book. The JAX-RS specification has no answer to this problem, since it only provides role-based security with the @RolesAllowed annotation. It is unfortunate the JavaEE spec does not offer at least some interfaces we could implement for this purpose… we need to roll our own. There are many ways this can be achieved; I will present one of them.

Owned JPA entities extend a common class

All owned entities should extend a common class; let's call it OwnedEntity.

@Entity
@Table(name = "books")
public class BookEntity extends OwnedEntity {}

@MappedSuperclass
public class OwnedEntity {

    @Nullable
    @Column(name="owner_id")
    protected String ownerId;

    public String getOwnerId() {
        return ownerId;
    }
}

Protect owned resources with an interceptor

Create an interceptor that we will use on each owned resource to check the owner of the entity against the authorized user. We pass the owned entity class as a parameter; the interceptor implementation needs it to fetch the correct JPA entity.

@InterceptorBinding
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
public @interface AuthorizeOwner {

    @SuppressWarnings("rawtypes")
    @Nonbinding Class<? extends OwnedEntity> type() default OwnedEntity.class;
}

We protect an owned resource with this interceptor:

@GET
@Path("/{id}")
@AuthorizeOwner(type = BookEntity.class)
public Response findBookById(@PathParam("id") UUID id) {}

Interceptor implementation

@AuthorizeOwner
@Interceptor
@Priority(2001)
public class AuthorizeOwnerInterceptor {

  @Context
  private SecurityContext sc;

  @Inject
  private EntityManager em;

  @AroundInvoke
  public Object methodEntry(InvocationContext ctx) throws Exception {

    AuthorizeOwner t = ctx.getMethod().getAnnotation(AuthorizeOwner.class);

    Object[] params = ctx.getParameters();
    //ID must be first parameter and of type UUID
    if (params.length>0 && params[0] instanceof UUID) {
      String id = ((UUID)params[0]).toString();

      OwnedEntity object = em.find(t.type(), id);
      //Principal#getName carries the authorized user's identifier
      if (object!=null && !object.getOwnerId().equals(sc.getUserPrincipal().getName())) {
        throw new ForbiddenException();
      }
    }
    else {
      //Illegal use
      throw new InternalServerErrorException();
    }

    return ctx.proceed();
  }
}

Make sure this interceptor runs after your security interceptor, i.e. its @Priority value must be the higher of the two, since a valid authenticated user must already be present when it runs.

The limitation of this interceptor is that it can only protect ID-based resources of the form /resource/:id. For list resources, use separate logic to insert an additional WHERE filter by owner ID into the TypedQuery/Criteria query used for fetching the list, as sketched below.
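
A minimal sketch of that list-side filter (entity and field names follow the example above):

public List<BookEntity> findBooksForCurrentUser() {
    //Only fetch books owned by the authorized user
    TypedQuery<BookEntity> query = em.createQuery(
            "SELECT b FROM BookEntity b WHERE b.ownerId = :ownerId", BookEntity.class);
    query.setParameter("ownerId", sc.getUserPrincipal().getName());
    return query.getResultList();
}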

The second limitation is that the entity ID must always be declared as the first parameter of the resource method. Another way would be to enforce the name "id" for the parameter representing the entity ID, but that requires additional reflection info to get method parameter names.

The example here uses SecurityContext to retrieve the authorized user. You might need to inject your own context or a parsed JWT token to retrieve the needed identifier, depending on what you store in your database as the owner ID (user UUID, email etc.).

An improvement to this interceptor would be to check the roles in the security context and skip the owner check for an ADMIN role or similar, since we probably want to allow admins to access all resources.

So how useful is this?

Good:

+ protects owned resources with a simple annotation

Not so good:

- only protects ID-based resources; you still need a separate mechanism for lists
- only protects the base entity, not nested owned relations (/book/:id/somethingElse/:id2), which means a child entity can have a different owner than its parent and the client must be prevented from accessing the child. I have not yet stumbled upon such a requirement, though.
- forcing method parameter position or consistent naming in resource methods

Creating a new torrent and seeding with Transmission

You have set up a Transmission server on your Linux box together with Transmission Web or something along those lines, and now you are wondering… how can I actually seed a NEW file?

I couldn't find a straightforward answer on the web, so here is a short tutorial:

  1. Upload your file to your Transmission download directory
  2. cd to that directory and create a torrent file (let's say the file you uploaded was called myfile.rar):
    transmission-create -o myfile.torrent -c "this is my file comment" -t tracker1 -t tracker2 -t tracker3 myfile.rar

    Replace tracker1, tracker2, tracker3, … trackerN with a bunch of trackers. Better to specify more than one in case some go down. Here is a cool little list of public trackers.

  3. Download the new .torrent file you just created, open Transmission Web and add the torrent. Since the file already exists in the download directory, Transmission will just revalidate the data and start seeding. *mind blown*
  4. Distribute the torrent file to your people or generate a magnet link with
    transmission-show -m myfile.torrent
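
    The output is a standard magnet URI, roughly of this shape (info hash elided):

    magnet:?xt=urn:btih:<infohash>&dn=myfile.rar&tr=<tracker>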
