Receive only the data your client needs – full dynamic JSON filtering with Jackson

JSON returned by your REST API often grows into incredibly big structures and data sizes as business logic complexity is added over time. Then there are API methods returning lists of objects, which can be huge. If you serve multiple clients, each one can have different demands on what is and is not needed from that data, so the backend can't decide on its own what to prune and what to keep. Ideally, the backend would always return the full JSON by default but allow clients to specify exactly what they want and adjust the response accordingly. We can achieve this using the power of the Jackson library.

Goal:
– allow REST API clients to decide on their own which parts of JSON to receive (full JSON filtering)

Resources for this tutorial:
– Microprofile or JakartaEE platform (JAX-RS)
– Jackson library
– Java classes (lib) representing your API responses which are serialized to JSON
– some custom code to bring things together

The lib module

First, let's define a few classes which represent our JSON responses.

public class Car {

  private Engine engine;

  private List<Wheel> wheels;

  private String brand;

  //Getters and setters..
}

public class Wheel {

  private BigDecimal pressure;

  //Getters and setters..
}

public class Engine {
  
  private int numOfCylinders;

  private int hp;

  //Getters and setters..
}

Our lib serialized to JSON would look something like this:

{
    "engine": {
        "numOfCylinders": 4,
        "hp": 180
    },
    "wheels": [
        {
            "pressure": 30.2
        },
        {
            "pressure": 30.1
        },
        {
            "pressure": 30.0
        },
        {
            "pressure": 30.3
        }
    ],
    "brand": "Jugular"
}

Let's say one of our clients only needs the engine horsepower and brand information. We want to be able to specify a query parameter like filter=car:engine,brand;engine:hp and receive the following:

{
    "engine": {
        "hp": 180
    },
    "brand": "Jugular"
}

Enter Jackson

Jackson provides an annotation for such tasks called @JsonFilter. This annotation expects a filter name as a parameter, and a filter with that name must be applied to the serialization mapper, for example:

FilterProvider filters = new SimpleFilterProvider()
    .addFilter("carFilter", SimpleBeanPropertyFilter.filterOutAllExcept("wheels"));
String jsonString = mapper.writer(filters)...
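To make this concrete, here is a minimal, self-contained sketch; it assumes Car carries the @JsonFilter("carFilter") annotation introduced below, and that car is an instance populated as in the example JSON:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ser.FilterProvider;
import com.fasterxml.jackson.databind.ser.impl.SimpleBeanPropertyFilter;
import com.fasterxml.jackson.databind.ser.impl.SimpleFilterProvider;

ObjectMapper mapper = new ObjectMapper();
FilterProvider filters = new SimpleFilterProvider()
    .addFilter("carFilter", SimpleBeanPropertyFilter.filterOutAllExcept("wheels"));
//Only the "wheels" property of Car survives serialization
String jsonString = mapper.writer(filters).writeValueAsString(car);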

As you can see, everything we need is already there, but it is a rather static affair. We need to take this and make it fully dynamic and client driven.

The reason a filter needs a name is that each one is bound to a class, and attribute filtering is done on that class. What we need to do is transform car:engine,brand into a carFilter with SimpleBeanPropertyFilter.filterOutAllExcept("engine", "brand").

For starters, let's add the filters to our classes:

@JsonFilter("carFilter")
public class Car {}

@JsonFilter("engineFilter")
public class Engine {}

@JsonFilter("wheelFilter")
public class Wheel {}

There is one thing about this that bothers me: the filter name is a static String, so it is refactoring-unfriendly if the class name changes some day. Couldn't we just name the filters by looking at the name of the underlying class? Yes we can, by extending Jackson's annotation introspection:

public class MyJacksonAnnotationIntrospector extends JacksonAnnotationIntrospector {

    @Override
    public Object findFilterId(Annotated a) {
        JsonFilter ann = _findAnnotation(a, JsonFilter.class);
        if (ann != null) {
            String id = ann.value();
            if (id.length() > 0) {
                return id;
            }
            else {
                try {
                    //Use className+Filter as filter ID if ID is not set, e.g. Car -> carFilter
                    Class<?> clazz = Class.forName(a.getName());
                    return StringUtils.uncapitalize(clazz.getSimpleName())+"Filter";
                } catch (ClassNotFoundException e) {
                    e.printStackTrace();
                }
            }
        }
        return null;
    }
}

With this, any class annotated with @JsonFilter("") will automatically get a filter named classNameFilter. We no longer need to specify filter names or keep them in sync with class names.

Our lib now looks like:

@JsonFilter("")
public class Car {}

@JsonFilter("")
public class Engine {}

@JsonFilter("")
public class Wheel {}
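A quick sanity check that the derived filter IDs work (a hedged sketch, reusing the imports and the car instance from the earlier snippet):

ObjectMapper mapper = new ObjectMapper();
mapper.setAnnotationIntrospector(new MyJacksonAnnotationIntrospector());
SimpleFilterProvider filters = new SimpleFilterProvider()
    .setFailOnUnknownId(false)
    .addFilter("carFilter", SimpleBeanPropertyFilter.filterOutAllExcept("brand"));
//Prints {"brand":"Jugular"}: the ID "carFilter" was derived from the class name Car
System.out.println(mapper.writer(filters).writeValueAsString(car));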

The next step is to transform the query parameter into our filter structure and apply it.

First, register a Jackson provider for JAX-RS server:

@Provider
public class JacksonProvider extends JacksonJsonProvider implements ContextResolver<ObjectMapper> {

    private final ObjectMapper mapper;

    public JacksonProvider() {
        mapper = new ObjectMapper();
        mapper.registerModule(new JavaTimeModule());
        mapper.setFilterProvider(new SimpleFilterProvider().setFailOnUnknownId(false));
        mapper.setAnnotationIntrospector(new MyJacksonAnnotationIntrospector());
    }

    @Override
    public ObjectMapper getContext(Class<?> type) {
        return mapper;
    }
}

We register our own introspector and disable failures on unknown filter IDs (in case the client filters by something nonexistent).

The provider must be registered in your REST Application:

@ApplicationPath("")
public class MyApplication extends Application {

    @Override
    public Set<Class<?>> getClasses() {

        Set<Class<?>> classes = new HashSet<>();

        classes.add(JacksonProvider.class);

        return classes;
    }
}

Next, we implement our own MessageBodyWriter to override the default serialization and apply the filters dynamically.

@Provider
@Produces(MediaType.APPLICATION_JSON)
public class JsonFilterProvider implements MessageBodyWriter<Object> {

    @Context
    private UriInfo uriInfo;

    @Context
    private JacksonProvider jsonProvider;

    public static final String PARAM_NAME = "filter";

    public boolean isWriteable(Class<?> aClass, Type type, Annotation[] annotations, MediaType mediaType) {
        return MediaType.APPLICATION_JSON_TYPE.equals(mediaType);
    }

    public long getSize(Object object, Class<?> aClass, Type type, Annotation[] annotations,
                        MediaType mediaType) {
        return -1;
    }

    public void writeTo(Object object, Class<?> aClass, Type type, Annotation[] annotations,
                        MediaType mediaType, MultivaluedMap<String, Object> stringObjectMultivaluedMap,
                        OutputStream outputStream) throws IOException, WebApplicationException {

        String queryParamValue = uriInfo.getQueryParameters().getFirst(PARAM_NAME);
        if (queryParamValue != null && !queryParamValue.isEmpty()) {

            SimpleFilterProvider sfp = new SimpleFilterProvider().setFailOnUnknownId(false);

            //We link @JsonFilter annotation with dynamic property filter
            for (Map.Entry<String, Set<String>> entry : getFilterLogic(queryParamValue).entrySet()) {
                sfp.addFilter(entry.getKey() + "Filter", SimpleBeanPropertyFilter.filterOutAllExcept(entry.getValue()));
            }

            jsonProvider.locateMapper(aClass, mediaType).writer(sfp).writeValue(outputStream, object);
        }
        else {
            jsonProvider.locateMapper(aClass, mediaType).writeValue(outputStream, object);
        }
    }

    //Map of object names and set of fields
    private Map<String, Set<String>> getFilterLogic(String paramValue) {
        // ?filter=car:engine,brand;engine:numOfCylinders
        String[] filters = paramValue.split(";");

        Map<String, Set<String>> filterAndFields = new HashMap<>();

        for (String filterInstance : filters) {
            //car:engine,brand
            List<String> pair = Arrays.asList(filterInstance.split(":"));
            if (pair.size() != 2) {
                //Reject malformed filter definitions coming from the client with a 400
                throw new BadRequestException("Malformed filter: " + filterInstance);
            }

            Set<String> fields = new HashSet<>(Arrays.asList(pair.get(1).split(",")));
            filterAndFields.put(pair.get(0), fields);
        }

        return filterAndFields;
    }
}

The getFilterLogic method assembles the query parameter into a map of class names to field sets; e.g. car:engine,brand;engine:hp becomes {car=[engine, brand], engine=[hp]}, which is then applied as Jackson filters.

Finally, we need to register our JsonFilterProvider in our Application as we did with JacksonProvider.

@ApplicationPath("")
public class MyApplication extends Application {

    @Override
    public Set<Class<?>> getClasses() {

        Set<Class<?>> classes = new HashSet<>();

        classes.add(JacksonProvider.class);
        classes.add(JsonFilterProvider.class);

        return classes;
    }
}

One small deficiency of this solution is that once you specify a class with fields to filter, it will be filtered wherever it appears in the nested JSON structure; you can't filter a specific class only at a specific level. Realistically, I think this is a rather minor problem compared to the benefits and the simplicity of the implementation.

Finally, a question on documentation. How do you tell the client developer about all the possible filter object names and their attributes? If you use OpenAPI, you are 95% there. Simply document that filtering is done by model name followed by attribute names; the client developer can easily figure out the names from your OpenAPI specification. The only remaining problem is when you don't want to allow filtering on all classes. In this case my approach would be to mark a filterable class in its OpenAPI description:

@ApiModel(description = "[Filterable] A car.")

This manual approach to documentation goes against the rest of the paradigm, so a real purist would write an OpenAPI extension that introspects all @JsonFilter annotations and modifies the descriptions automatically. But let's leave that for a future blog post.

A similar, more advanced and out-of-the-box solution is squiggly, which also uses Jackson under the hood.


Updating server from Debian Stretch to Buster

Not the most pleasant experience... I expected a smoother upgrade from the Debian team. Upgrading from 8 to 9 was a walk in the park compared to this.

1. MySQL silently fails to start after upgrade

MySQL was left behind at version 5.5 after the upgrade and would just not start anymore, probably segfaulting. There is no mysql-server package anymore, so I really had no option but to remove it and install MariaDB. In addition, I had trouble running MariaDB due to the requirement to run mysql_upgrade... but I couldn't run that because I had no working instance of the MySQL server! Installing the package default-mysql-server instead somehow solved the problem.

2. phpMyAdmin removed from packages

Not sure how maintaining phpMyAdmin is such a big task that the package was dropped from the repos. Regular setup is simply unzipping the code and adding an Apache config.

3. docker fails due to nftables switch

Docker is such a big and important package these days… and it breaks because iptables is no longer the default firewall backend (Buster switched to nftables). I would expect the upgrade process not to make the switch in this case.
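The commonly used workaround, assuming your setup matches mine, is to point the iptables alternatives back at the legacy backend and restart Docker:

update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
systemctl restart docker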

4. apt autoremove anomaly

For some reason, running autoremove wanted to purge essential packages such as php, gcc and python3. I did not pay too much attention, but alerts started going off when ifup was getting removed and my ssh connection was lost. ?????????? (10 question marks)


Luckily, this was all of the trouble; dealing with broken WordPress plugins was a relaxing task afterwards.


HTTP Accept-Language request header to ResourceBundle

The HTTP Accept-Language header is specified by the client to inform the backend of the preferred language for the response. In Java, the go-to utility for handling localization is ResourceBundle.

What is missing is a standard way to properly convert the input header to the correct ResourceBundle. Specifically,

ResourceBundle i18n = ResourceBundle.getBundle("bundles/translations", request.getLocale());

is insufficient. The HttpServletRequest::getLocale() method returns only the top preferred locale, and if no matching ResourceBundle exists, getBundle falls back to the default locale instead of going down the priority list. For example, this header:

Accept-Language: de-DE;q=1.0,fr-FR;q=0.9,en-GB;q=0.8

when the backend is missing de-DE translations, will return the system default (e.g. en-GB) instead of fr-FR, which is second by priority.
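To illustrate, suppose the backend ships only these bundles (file names hypothetical):

bundles/translations.properties       (base bundle, last resort)
bundles/translations_fr_FR.properties
bundles/translations_en_GB.properties

A client sending the header above should receive fr-FR content, but resolving only the top locale de-DE falls through to the default locale instead.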

Clients don't usually request languages unknown to the backend, but it is possible in theory, and languages can be added automatically by the client platform (iOS does this) without the client knowing.

We need to iterate the locale chain and find the highest match that exists as a bundle.

Below is a sample in a JAX-RS environment.

@RequestScoped
public class Localization {

    @Context
    private HttpServletRequest request;

    private ResourceBundle i18n;

    @PostConstruct
    void postConstruct() {
        //List of locales from Accept-Language header
        List<Locale> locales = Collections.list(request.getLocales());

        if (locales.isEmpty()) {
            //Fall back to default locale
            locales.add(request.getLocale());
        }

        for (Locale locale : locales) {
            try {
                i18n = ResourceBundle.getBundle("bundles/translations", locale);
                if (!languageEquals(i18n.getLocale(), locale)) {
                    //Default fallback detected
                    //The resource bundle that was returned has different language than the one requested, continue
                    //Only language tag is checked, no support for detecting different regions in this sample
                    continue;
                }
                break;
            }
            catch (MissingResourceException ignore) {
            }
        }
    }

    private boolean languageEquals(Locale first, Locale second) {
        return getISO2Language(first).equalsIgnoreCase(getISO2Language(second));
    }

    private String getISO2Language(Locale locale) {
        String[] localeStrings = (locale.getLanguage().split("[-_]+"));
        return localeStrings[0];
    }

    public ResourceBundle i18n() {
        return this.i18n;
    }
}
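Usage from a resource class could then look like this (the injection style, path and bundle key are assumptions):

@Path("greetings")
public class GreetingResource {

    @Inject
    private Localization localization;

    @GET
    public String greeting() {
        //Resolves against the bundle matched from the Accept-Language header
        return localization.i18n().getString("greeting");
    }
}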


Resize Fedora root partition

The default root partition size on my Fedora installs usually becomes too small down the line, to the point where I can no longer install packages or perform upgrades without removing packages or clearing the dnf cache.

Therefore I wanted to shrink my home partition and add that space to root.

We can't perform the resize while the partitions are mounted, so we need to boot into emergency or rescue mode. I first tried emergency mode, but the boot would lock up at the Fedora logo, so I went with rescue instead.

Once in the GRUB menu, press e to edit. At the end of the linux16 or linuxefi line, add

systemd.unit=rescue.target

Press Ctrl+x to boot with the modified parameters. Once in rescue mode, perform the resize:

lvresize -L -10G --resizefs /dev/fedora/home
lvresize -L +10G --resizefs /dev/fedora/root

Note that shrinking only works on filesystems that support it (ext4 does, XFS does not), so check what your home volume uses first.
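Afterwards, a quick sanity check (volume names assume the default fedora volume group):

lvs              # confirm the new logical volume sizes
df -h / /home    # confirm the filesystems followed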



That moment when you need to look up the definition of a C++ for loop

I was getting a segfault in an old piece of code which I maintain. The culprit was pinpointed to this:

bool found = false;
vector<string> :: iterator i;
for (i = v.begin(); !found && i != v.end(); ++i) {
    if (name == *i) {
        found = true;
    }
}
if (found) {
   v.erase( i ); // <-- segfault here
}

I went through this piece of code at least 10 times without noticing the problem. The snippet is simple enough.. when a match is found, set found to true, and that breaks the loop since the loop condition now evaluates to false. The iterator remains at the position of the matched element.

WRONG.

What we are actually getting is iterator+1.

What we don't see directly in the code is that the increment happens before the condition is evaluated for the next iteration, giving us iterator+1. That erases the wrong element, and if the match is on the last element, i equals end() and erasing it is undefined behavior, hence the segfault.
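The minimal fix is to break out of the loop before the increment runs, or to drop the manual loop entirely in favor of std::find from <algorithm> (assuming the same using namespace std as the original):

vector<string>::iterator i = find(v.begin(), v.end(), name);
if (i != v.end()) {
    v.erase(i); //i still points at the match, so this erase is safe
}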


Tenant resource authorization in JAX-RS

You have a book REST resource and each book has an owner. Only the owner of a book can access it. The JAX-RS specification has no answer to this problem, since it only provides role-based security with the @RolesAllowed annotation. It is unfortunate that the JavaEE spec does not offer at least some interfaces which we could then implement for this purpose.. we need to roll our own. There are many ways this can be achieved; I will present one way of doing it.

Owned JPA entities extend a common class

All owned entities should extend a common class, let's call it OwnedEntity.

@Entity
@Table(name = "books")
public class BookEntity extends OwnedEntity {}

@MappedSuperclass
public class OwnedEntity {

    @Nullable
    @Column(name = "owner_id")
    protected String ownerId;

    //Used by the authorization interceptor below
    public String getOwnerId() {
        return ownerId;
    }
}

Protect owned resources with an interceptor

Create an interceptor which we will apply to each owned resource; it will check the owner of the entity against the authorized user. We pass the owned entity type as a parameter, which we need in order to fetch the correct JPA entity in the interceptor implementation.

@InterceptorBinding
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
public @interface AuthorizeOwner {

    @Nonbinding Class<? extends OwnedEntity> type() default OwnedEntity.class;
}

We protect an owned resource with this interceptor

@GET
@Path("/{id}")
@AuthorizeOwner(type = BookEntity.class)
public Response findBookById(@PathParam("id") UUID id) {}

Interceptor implementation

@AuthorizeOwner
@Interceptor
@Priority(2001)
public class AuthorizeOwnerInterceptor {

  @Context
  private SecurityContext sc;

  @Inject
  private EntityManager em;

  @AroundInvoke
  public Object methodEntry(InvocationContext ctx) throws Exception {

    AuthorizeOwner annotation = ctx.getMethod().getAnnotation(AuthorizeOwner.class);

    Object[] params = ctx.getParameters();
    //ID must be the first parameter and of type UUID
    if (params.length > 0 && params[0] instanceof UUID) {
      String id = ((UUID) params[0]).toString();

      OwnedEntity object = em.find(annotation.type(), id);
      //Principal exposes getName(); it must match whatever you store as the owner ID
      if (object != null && !sc.getUserPrincipal().getName().equals(object.getOwnerId())) {
        throw new ForbiddenException();
      }
    }
    else {
      //Illegal use
      throw new InternalServerErrorException();
    }

    return ctx.proceed();
  }
}

Make sure this interceptor runs after your security interceptor (with @Priority, interceptors with smaller values run first), since a valid authenticated user must already be present when it fires.

The limitation of this interceptor is that it can only protect ID-based resources of type /resource/:id. For list resources, use separate logic to add an additional WHERE filter by owner ID to the TypedQuery/Criteria query used for list fetching, as sketched below.
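A hedged sketch of that list counterpart, assuming the BookEntity mapping and SecurityContext wiring from the examples above:

TypedQuery<BookEntity> query = em.createQuery(
    "SELECT b FROM BookEntity b WHERE b.ownerId = :ownerId", BookEntity.class);
query.setParameter("ownerId", sc.getUserPrincipal().getName());
List<BookEntity> books = query.getResultList();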

The second limitation is that the entity ID must always be declared as the first parameter of the resource method. Another way would be to enforce "id" as the parameter name representing the entity ID, but this requires additional reflection info to get method parameter names.

The example here uses SecurityContext to retrieve the authorized user. You might need to inject your own context or a parsed JWT token to retrieve the needed identifier, depending on what you store in your database as the owner ID (user UUID, email etc).

An improvement to this interceptor is to check the roles in the security context and skip the owner check if the role is ADMIN or similar, since we probably want to allow admins to access all resources.
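For example, at the top of methodEntry (the ADMIN role name is an assumption):

if (sc.isUserInRole("ADMIN")) {
  //Admins bypass the ownership check
  return ctx.proceed();
}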

So how useful is this?

Good:

+protects owned resources with a simple annotation

Not so good:

-only protects ID-based resources, you still need a separate mechanism for lists
-only protects the base entity, not nested owned relations (/book/:id/somethingElse/:id2); a child entity could have a different owner than its parent, and the client must be prevented from accessing the child. I have not stumbled upon such a requirement yet though.
-forcing method parameter position or consistent naming in resource methods

Creating a new torrent and seeding with Transmission

You have set up a Transmission server on your Linux box together with Transmission Web or something along those lines, and now you are wondering.. how can I actually seed a NEW file?

I couldn't find a straightforward answer on the web, so here is a short tutorial:

  1. Upload your file to your Transmission download directory
  2. cd to that directory and create a torrent file (let's say the file you uploaded was called myfile.rar):
    transmission-create -o myfile.torrent -c "this is my file comment" -t tracker1 -t tracker2 -t tracker3 myfile.rar

    Replace tracker1, tracker2, tracker3, …trackerN with a bunch of trackers; a complete example follows after this list. Better to specify more than one in case some go down. Here is a cool little list of public trackers.

  3. Download the new .torrent file you just created, open Transmission Web and add the torrent. Since the file already exists in the download directory, Transmission will just revalidate the data and start seeding. *mind blown*
  4. Distribute the torrent file to your people or generate a magnet link with
    transmission-show -m myfile.torrent
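For reference, a complete invocation might look like this (the tracker URLs are placeholders, not recommendations):

transmission-create -o myfile.torrent -c "this is my file comment" \
  -t udp://tracker.example.org:1337/announce \
  -t udp://tracker2.example.org:6969/announce \
  myfile.rar
transmission-show -m myfile.torrent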


Ubuntu 18.04 on MacBook Pro 11.5 – a sad state of affairs

There was an extra MacBook Pro 11.5 lurking around, so I decided to install Ubuntu 18.04 on it and try to set up a usable workstation.

A culmination of several issues prompted me to not pursue this setup further. Linux drivers and MacBook hardware just don't play along very well.

Display flickering/corruption on main display

The bottom part of the HiDPI screen experiences some kind of flickering, as tracked by this bug. Changing desktop environments, distros and X server configuration did not bring any improvement. For a moment Wayland seemed to have solved the issue, only for it to reappear on the next boot.

Since I also connected 3 external monitors this was not a deal breaker. External monitors did not display this issue.

Fan going at 100% most of the time

Even at idle or low load the fans would spin at 100%. Thermald was not doing its job for whatever reason. It is hard to say why, since most temperature sensors seem to be working fine and report acceptable temperatures.

I found a simple but great project called mbpfan which stopped the fans immediately after being started while still keeping temperatures seemingly in check. I increased the minimum fan speed in the mbpfan config just to avoid any potential overheating. With this setup I was getting 70-80 degrees and a quiet fan.

CPU is in constant low frequency state (dealbreaker)

After installing cpufreq gnome extension I figured out that CPU is always at 800MHz. Mbpfan was not the cause since the same lack of scaling appeared when it was turned off.

First I tried to disable Intel p_state driver but the lack of scaling continued. Using userspace driver in cpufreq, I was unable to change min/max frequencies or force a specific frequency via cpupower.

As per the ArchWiki, I figured out that the BIOS was enforcing this state via

cat /sys/devices/system/cpu/cpu0/cpufreq/bios_limit

After ignoring ppc via

echo 1 > /sys/module/processor/parameters/ignore_ppc

the CPU instantly started to scale as expected. Unfortunately, this was not the final solution, since the low state would randomly reappear for long periods of time, with only small windows of scaling working as expected. Therefore, even with ignore_ppc I would still get 800MHz most of the time, with temps reported around 70 degrees.

In this state Gnome Shell would lag and everything was half-usable.

Something in hardware was throttling CPU and I wasn't able to overcome it.

Bcmwl driver very spotty

The WiFi bcmwl driver is very spotty. It would connect to an Android hotspot no problem but failed to connect to the WiFi router. A small sample, but roughly 50% reliability.

Display positions not remembered after reboot

I had to rearrange the external monitors on each reboot since Gnome would not remember their positions. I had to come up with an xrandr script to run after login to remedy this sad state of affairs.

No per-monitor scaling

Gnome still does not support setting scale factors per monitor. Again, I had to come up with an xrandr script to achieve 200% scaling on the HiDPI display and regular scaling on the external monitors (1920×1200).

Broken scaling under Wayland

Apparently if you set the scale factor to default in Wayland session, things should "just work" across HiDPI and non-HiDPI displays. Don't believe these people, they are liars.

I set the scaling to default, but that made the HiDPI desktop tiny while the external monitors were fine. Increasing the scaling to 200% made the HiDPI display fine, but the external monitors got scaled up as well.

There is also no xrandr under Wayland so you can't help yourself with that.

Broken rendering of electron apps on external monitors

If using Postman on an external monitor, parts of the dialog boxes would simply disappear, making the tool unusable. Using Postman on main display did not have this issue. Weird.

The bottom line: get a Dell or a Lenovo for your Linux workstation needs.

Xrandr framebuffer and per-display scaling

Ubuntu 18.04 LTS came out recently with the Gnome desktop as the default. Unfortunately, even in 2018 it won't remember external monitor positioning after a reboot and has no support in display settings for per-display scaling. Year of the Linux desktop, anyone?

Xrandr is a powerful Linux tool for manipulating displays. Unfortunately, the man page is very sparse on information, with badly explained flags, and various Linux guides are no better.

This example will create a triple monitor setup with HiDPI laptop display at the bottom of the array.

Xrandr command:

xrandr --fb 11520x4200 \
  --output eDP --pos 4320x2400 --mode 2880x1800 --scale 1x1 --primary \
  --output DisplayPort-1 --pos 0x0 --mode 1920x1200 --scale 2x2 \
  --output DisplayPort-0 --pos 3840x0 --mode 1920x1200 --scale 2x2 \
  --output HDMI-0 --pos 7680x0 --mode 1920x1200 --scale 2x2


Gnome scaling is set to 200% so our HiDPI native display looks normal. Unfortunately this also means non-HiDPI displays have this scaling applied which is not what we want.

Framebuffer is the full outer rectangle which must be able to contain our display setup as a whole.

Since external displays are scaled 2×2 (zoom out), they take twice the size of their actual resolution in our framebuffer. Meaning their sizes in fb are actually 3840×2400.

The Y of the framebuffer is therefore 2400+1800=4200 (the HiDPI display is scaled 1×1, so it takes the same amount of space in the framebuffer as its resolution).

X of the framebuffer is 3*3840=11520.

--fb specifies the full framebuffer size in pixels (not to be confused with --fbmm, which sets the reported physical size in millimeters)

--pos specifies the position of the display in the buffer; position 0x0 is the top-left corner

--mode sets the actual display resolution

--output specifies the display output (run xrandr to list all available)

--scale specifies "zooming" in (<1) or out (>1)


The end result has some invisible area in the bottom-left and bottom-right corners, so it is not ideal. I have yet to figure out if it is possible to fence off that area.

The HiDPI display is also not perfectly aligned with the top display, but that could be corrected with fractional scaling. It didn't bother me enough to fiddle with it.

Finally, you should run this command from a startup script so you get the correct monitor positioning and scaling after login.
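For example, a minimal autostart entry could look like this (file name and script path are hypothetical):

[Desktop Entry]
Type=Application
Name=Monitor layout
Exec=/home/user/bin/monitor-layout.sh

Save it as ~/.config/autostart/monitor-layout.desktop and put the xrandr command above into the referenced script.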