User blog:Jrogan92/GDPR



As the GDPR deadline has come and gone, our team has reflected on its effort in building the front-end library. The task was to build a front-end modal that FANDOM would integrate across its web products. For obvious reasons (a €20 million fine), we set a strict zero-tolerance policy for failures. Our goal from the outset was to create the most well-tested piece of code at FANDOM.

The first problem we had to tackle was reliably detecting where in the world the user viewing a given page is visiting from. Our network provider, Fastly, offers a geo-detection service through its Varnish servers. This exposes a wealth of information; most importantly, an ISO 3166-1 alpha-2 country code associated with the client IP address. We map that value onto our accepted list of EU countries covered by GDPR, and if there is a match, we show the modal.
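As a rough sketch, that check boils down to a set lookup. The country list below is illustrative only; the actual accepted list lives in the library and may also cover EEA members such as Iceland, Liechtenstein, and Norway:

```javascript
// Sketch: deciding whether to show the GDPR modal from the CDN's geo
// lookup. The set below is illustrative, not the canonical list.
const EU_COUNTRY_CODES = new Set([
  'AT', 'BE', 'BG', 'HR', 'CY', 'CZ', 'DK', 'EE', 'FI', 'FR',
  'DE', 'GR', 'HU', 'IE', 'IT', 'LV', 'LT', 'LU', 'MT', 'NL',
  'PL', 'PT', 'RO', 'SK', 'SI', 'ES', 'SE', 'GB',
]);

// countryCode: ISO 3166-1 alpha-2 value reported for the client IP
function isGdprCountry(countryCode) {
  return EU_COUNTRY_CODES.has(String(countryCode).toUpperCase());
}
```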

We also wanted to track which users accept and which reject the new terms. For a rejection, we cannot track any personally identifiable information, but we can still record that the terms were rejected. When a user chooses to reject, we set a cookie for a period of time (currently 31 days), after which the user is prompted again to confirm they want to opt out.
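A dependency-free sketch of that expiry logic is below. The cookie name and attributes are hypothetical; the library itself manages cookies through js-cookie, covered later in the post:

```javascript
// Sketch of the opt-out cookie's lifetime. Once the cookie expires
// (31 days later), the modal prompts the user to confirm the opt-out
// again. The cookie name is illustrative.
const REJECTION_COOKIE = 'tracking-opt-out';
const REJECTION_TTL_DAYS = 31;

function buildRejectionCookie(now = new Date()) {
  const expires = new Date(
    now.getTime() + REJECTION_TTL_DAYS * 24 * 60 * 60 * 1000
  );
  return `${REJECTION_COOKIE}=1; expires=${expires.toUTCString()}; path=/`;
}
```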

This tracking quickly proved essential. Directly after we released, we noticed a high number of rejections from non-EU countries. The dominant theory was that users with cookies disabled were seeing the modal on every page load. Understandably, this was annoying and led to rejections. We quickly addressed this edge case by defaulting to not displaying the modal to these users.

Tech Stack
We chose a tech stack that mirrors what we use in our team's main project, FANDOM Creator. However, we had to be mindful of the differences between an application and a library. React is our UI library, but it comes with a large build size of roughly 45kB. For a library, we couldn't justify that cost just to interact with the DOM. Fortunately, a lightweight alternative exists to solve this exact problem: preact.js. Weighing in at only 3kB, with a declared goal of keeping its API in line with React's, it is an ideal choice for libraries. Although performance wasn't an issue, since we only instantiate a handful of DOM nodes, it is interesting to note the performance benefits gained from the more "bare metal" approach preact takes with regard to the DOM:
 * JS Framework Benchmark Interactive Results
 * React vs. Preact

We build the library with webpack, transpiled with Babel to target our supported-browsers list. The library exports a single function that can be invoked to kick off the process of showing the modal, or to call the appropriate callbacks if the user has already accepted or rejected tracking. The bundle is built using webpack's libraryTarget: "umd" option, so it can be consumed by any of our projects.

To style the modal as a library, we used CSS Modules. CSS Modules are great for libraries because they alleviate naming conflicts: each class name is hashed and mapped to its own unique styles. Global scope is removed, and the only conflicts that remain are generic selectors on nodes.

When rendered on the page, each class name is replaced with its hashed, unique counterpart.
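As an illustration (not the library's actual markup; the hashed values below are made up), the JavaScript side receives a plain object mapping local class names to their generated counterparts:

```javascript
// Illustrative only: the object a CSS Modules import hands back to
// JavaScript after the build step.
const styles = {
  modal: 'modal__modal___1x3fh',
  overlay: 'modal__overlay___2b9cz',
};

// A component referencing styles.modal then renders markup such as:
const markup =
  `<div class="${styles.overlay}"><div class="${styles.modal}">...</div></div>`;
```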

We stumbled through a couple of unique cookie-related bugs. Unsurprisingly, Internet Explorer reared its ugly head. We used js-cookie, which offers a simple API for interacting with cookies; quickly scanning its docs, you can see several notes about Internet Explorer. To sidestep those issues, we never delete the cookies and instead change their value. We also ran into an issue caused by the way the Set-Cookie header sets cookies compared to how we set cookies on the client. When there are subdomains, one has to be very precise in setting the proper domain value on a cookie.
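The workaround can be sketched like this. The setCookie parameter stands in for js-cookie's Cookies.set; the cookie name, sentinel value, and domain are hypothetical:

```javascript
// Sketch of the IE workaround: overwrite the cookie with a sentinel
// value instead of deleting it.
const CLEARED = 'cleared';

function clearConsentCookie(setCookie) {
  // An explicit domain keeps the same cookie visible across subdomains.
  setCookie('tracking-opt-in', CLEARED, {
    domain: '.fandom.com',
    expires: 31,
  });
}
```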

Testing
To accomplish our testing goals, we wanted full coverage with unit tests, plus automated regression and integration tests on all of the browsers we support.



Unit Testing
Mocha is one of the most popular testing frameworks and by far the most flexible. We paired it with Chai's simple assert style to write our tests. Karma handles the orchestration of these libraries and generates reports for coverage and test failures: a simple report using JUnit and a more visual one using Allure. Both of these reports play nicely with Jenkins, letting us see test results over time and across versions. We use jsdom to emulate the DOM for our unit tests. One issue cropped up regarding global state: we have special logic for certain domains, and jsdom with Karma didn't provide a simple way to modify window.location once booted up. This forced us to write our code in a better, more testable pattern, so this deficiency can be called a win.
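The resulting pattern can be sketched as follows: the hostname is injected rather than read from window.location inside the library, so jsdom's fixed location no longer gets in the way. The function and domain names here are hypothetical:

```javascript
// Sketch of the testability refactor: callers pass the hostname in,
// instead of the library reaching for window.location itself.
function isSpecialDomain(hostname) {
  return /(^|\.)fandom\.com$/.test(hostname);
}

// In a Mocha/Chai spec this is now trivial to exercise, e.g.:
//   assert.isTrue(isSpecialDomain('community.fandom.com'));
```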

Integration & Regression Testing


BrowserStack has a great suite of offerings for any open source project. Prior to deploying a new version, we run regression tests on 10 devices, including iPhones, Android phones, Windows machines, and Macs. BrowserStack records a video of each test run, which makes debugging far easier. When configuring the Selenium tests, we used webdriver.io as the test runner, which made it easy to interface with BrowserStack and Selenium. BrowserStack can also route network requests through your local machine, so working against a local project requires nearly zero setup.

Publishing with NPM
We publish our library on NPM to make it easy for other teams to quickly integrate it across the various front ends at FANDOM. The package is published under our organization and is open source, so we can demonstrate our honest attempt at following the hazy guidelines provided in the GDPR legislation.

To deploy an update to the library, we trigger a Jenkins job. The job first builds the app and deploys a demo application to a Kubernetes instance so that BrowserStack has a location to run the Selenium tests against. If the test suite passes, the version number is bumped automatically and the package is published to NPM. The appropriate Slack channel is notified as well, so all consumers are aware of updates.