Planeta GNOME Hispano
Hispanic GNOME activity, 24 x 7

30 March 2020

How to create border maps for your projects

When working on data projects it is usual to need administrative maps. In my experience it is not trivial to find the cartographic files as open access or opensource data sources, so after some searching I found a method to create an ad-hoc map for any administrative region coded into OpenStreetMap. It’s not a trivial method, but it is not as complex as it seems at first sight. I’ll try to introduce the essential concepts so the recipe is easy to understand. If you know other methods as good as or better than this one, please give me some feedback.

I used this method with geodata for Spain so I guess it works with any other administrative region coded in OSM.

First you need to know an OpenStreetMap concept: the relation. In our case we’ll use multipolygon relations, which are used to code the borders of the areas we are interested in. The important thing to remember here is that you are going to use an OSM relation.

Second, you’ll want to select the region of your interest and figure out how it has been mapped in OSM, so you need to find the related OSM relation. As an example I’ll use Alamedilla, my parents’ town in the province of Granada, Spain.

the method

Go to https://www.openstreetmap.org and search for the region of your interest. For example Alamedilla:

example screenshot

Click on the correct place and you’ll see something like this:

example screenshot

Look at the URL box in the browser and you’ll see something like this: https://www.openstreetmap.org/relation/343442. The code number you need for the next steps is the one in the URL after the relation keyword. In this example it is 343442.

Then visit the overpass turbo service, a powerful web-based data query and filtering tool for OpenStreetMap:

example screenshot

The white box at the left is where you write the code of your query for Overpass. There is a wizard tool in the menu, but it’s not trivial either. Instead you can copy exactly this code:


[out:json][timeout:2500];
(
    relation(343442)({{bbox}});
);
out body;
>;
out skel qt;

example screenshot

In your case you need to change the 343442 number (used for the Alamedilla example) to the relation number you got before. If you modify the query, keep in mind that the default timeout (25 seconds) may not be enough for your case; this query raises it to 2500.
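By the way, the same query can also be sent straight to the public Overpass API from the command line. A sketch with curl (the {{bbox}} placeholder only works inside overpass turbo, and querying a relation by its id doesn’t need it, so it is dropped here):

$ curl -s 'https://overpass-api.de/api/interpreter' \
    --data-urlencode 'data=[out:json][timeout:2500];relation(343442);out body;>;out skel qt;'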

Now, clicking the Run button will execute your query. Keep in mind that the resulting data set could be really big, depending on how big the area is.

So, here it is:

example screenshot

Zoom the map to have a better view:

example screenshot

Now you’ll find the resulting data set in GeoJSON format, ready in the Data tab (right side). If this format is fine for you, you are done. But if you need some other format you are in luck, because clicking the Export button offers some more formats to export to: GPX, KML and OSM data.

In this example we’ll use the KML format, used by Google Earth, Google Maps and many others.

example screenshot

importing into Google Earth

Open Google Earth:

example screenshot

and open our kml file: [File][Open]:

example screenshot

and here it is:

example screenshot

Note: I modified the color (in the object properties) to make it more visible in the screenshot.

So, it is done. Now you can use the kml file in your application, import it into any GIS software or convert it to another format if required.
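For conversions, GDAL’s ogr2ogr tool can usually do the job from the command line; a sketch with assumed file names:

$ ogr2ogr -f GeoJSON alamedilla.geojson alamedilla.kml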

importing into Google Maps

Go to Google MyMaps and create a new map. Import a new layer and select your kml file:

example screenshot

Here it is:

example screenshot

conclusion

Now you are able to create maps of any region added to OpenStreetMap, export them to any of the formats above and import them into your applications. Hope this helps.

If you end up using data from the OSM project, remember to add the correct credits:

We require that you use the credit “© OpenStreetMap contributors”.

See credit details at osm.org/copyright.

This is an example of how AWESOME OpenStreetMap is and of the extraordinary work these people do. Big thanks to all of the contributors for this impressive service.

28 March 2020

#GeratuEtxean: HackIt! Level 3

To finish, a classic.

An ELF for arm64, with a message for w0pr / Ramandi included 🙂

We open it with Ghidra and see that there is a function in charge of asking us for 16 characters and checking that they form a correct key.

Time to write a program that reverses those checks… But first we’ll have to fix that monster of decompiler code…

$ file cambridge_technology
cambridge_technology: ELF 64-bit LSB pie executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, for GNU/Linux 3.7.0, stripped

$ uname -a
Linux ip.ec2.internal 5.0.10-300.fc30.aarch64 #1 SMP Tue Apr 30 16:06:13 UTC 2019 aarch64 aarch64 aarch64 GNU/Linux

$ ./cambridge_technology
Password: 12312312313123131
FAIL!
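By the way, if you don’t have an aarch64 machine at hand, qemu-user can usually run the binary on x86_64. A sketch assuming Debian-style packages (the package names are an assumption, not part of the challenge):

$ sudo apt install qemu-user libc6-arm64-cross
$ qemu-aarch64 -L /usr/aarch64-linux-gnu ./cambridge_technology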

w0pr (@abeaumont specifically) was the first and only team that managed to reverse engineer that monster ¯\_(ツ)_/¯ Alfredo has published the write-up; it is worth reading (and replicating) in detail.

#GeratuEtxean: HackIt! Level 2

Our spies have located this level’s flag, but… somebody has destroyed the password box! Fortunately, they have provided us with instructions to send it, but we can’t quite understand them… Can you help us?

— BEGIN TRANSMISSION —
PAYLOAD FOUND ON trololo@54.171.128.20:34342
ACCESS b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW QyNTUxOQAAACD48tA2UHkNwf1gjzFoefbSiiA3s0+FIYWYOlxHuwDAVAAAALhSf19rUn9f awAAAAtzc2gtZWQyNTUxOQAAACD48tA2UHkNwf1gjzFoefbSiiA3s0+FIYWYOlxHuwDAVA AAAECeukBbUT2Vlozfd98BRRvKGCFRc0mdvRhAItlDfp1U7vjy0DZQeQ3B/WCPMWh59tKK IDezT4UhhZg6XEe7AMBUAAAALnJvb3RAaXAtMTcyLTMxLTYtNjUuZXUtd2VzdC0xLmNvbX B1dGUuaW50ZXJuYWwBAgMEBQYH
IDENTIFY USING 20ce8a7cc776a39ad291d4648e3e39ae.hax
SEND FLAG AS TEXT
— END TRANSMISSION —

If we try to connect with ssh trololo@54.171.128.20 -p 34342 we see that the server responds, asking for public key authentication. Decoding the ACCESS string from base64, we see a reference to a remote (AWS) machine and this keyword: openssh-key-v1nonenone3
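For instance, pasting the ACCESS string into a hypothetical access.b64 file, something like this recovers those readable strings:

$ tr -d ' ' < access.b64 | base64 -d | strings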

Testing around, we see that the key is an Ed25519 key, not RSA (the decoded blob mentions ssh-ed25519). We prepare it:

$ cat id_dsa
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACD48tA2UHkNwf1gjzFoefbSiiA3s0+FIYWYOlxHuwDAVAAAALhSf19rUn9f
awAAAAtzc2gtZWQyNTUxOQAAACD48tA2UHkNwf1gjzFoefbSiiA3s0+FIYWYOlxHuwDAVA
AAAECeukBbUT2Vlozfd98BRRvKGCFRc0mdvRhAItlDfp1U7vjy0DZQeQ3B/WCPMWh59tKK
IDezT4UhhZg6XEe7AMBUAAAALnJvb3RAaXAtMTcyLTMxLTYtNjUuZXUtd2VzdC0xLmNvbX
B1dGUuaW50ZXJuYWwBAgMEBQYH
-----END OPENSSH PRIVATE KEY-----

And we connect:

$ ssh -i id_dsa -p 34342 trololo@54.171.128.20
Last login: Sat Mar 28 14:34:31 2020 from xxxxxxx
-sh-4.2$

It is a restricted shell (no ls, no nothing). Also, if you try to delete something, it adds a blank space… Oh well… Tab autocompletion works, though. We see that a few directories exist, among them /bin. We add /bin to the PATH and we get cat, dig, ls and nsupdate.
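Something like this does it (the exact command is an assumption; all we know is that /bin was added to the PATH):

-sh-4.2$ export PATH=$PATH:/bin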

The dig command gives us a hint… remember also that we haven’t used this part of the statement yet:

IDENTIFY USING 20ce8a7cc776a39ad291d4648e3e39ae.hax

Investigating a bit, we see that there is a project with a .hax TLD for dynamic DNS. The other hint, «SEND FLAG AS TEXT», seems to indicate that we need to create a TXT record with the flag. But where is the flag? Well, there were not many directories on the restricted machine where we find ourselves, so crawling through folders we come across /var/tmp/secret:

-sh-4.2$ cat /var/tmp/secret
Is0latI0nFl4w3dNetW0rkz!

We ask dig about the hax TLD:

dig -t txt 20ce8a7cc776a39ad291d4648e3e39ae.hax

and we see we are on the right track:

-sh-4.2$ dig -t txt 20ce8a7cc776a39ad291d4648e3e39ae.hax

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-9.P2.amzn2.0.2 <<>> -t txt 20ce8a7cc776a39ad291d4648e3e39ae.hax
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12895
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;20ce8a7cc776a39ad291d4648e3e39ae.hax. IN TXT

;; AUTHORITY SECTION:
hax. 60 IN NS ns1.hax.

;; ADDITIONAL SECTION:
ns1.hax. 60 IN A 127.0.0.1

;; Query time: 0 msec
;; SERVER: 172.31.6.65#53(172.31.6.65)
;; WHEN: Sat Mar 28 16:41:53 UTC 2020
;; MSG SIZE rcvd: 136

We still need to run nsupdate… but first, let’s see who resolves DNS on this machine:

-sh-4.2$ cat /etc/resolv.conf
nameserver 172.31.6.65

$ nsupdate

server 172.31.6.65

update add 20ce8a7cc776a39ad291d4648e3e39ae.hax. 300 TXT "Is0latI0nFl4w3dNetW0rkz!"

send
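To double-check that the record is in place, the same dig query as before, pointed at that server, should now return the flag; something like:

-sh-4.2$ dig +short -t txt 20ce8a7cc776a39ad291d4648e3e39ae.hax @172.31.6.65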

And done! We press the button the ORG prepared for us at that level and move on to the next one.

#GeratuEtxean: HackIt! Level 1

This edition of the Gipuzkoa Encounter has been a bit strange. Confined at home, we have tried to make the best of it. Although it is not the same at all and we all prefer the in-person event, this online edition was not bad at all. And, to keep up the good habits, let’s get on with the write-up.

Help! Our Discord account has been hacked! It seems they have done weird things. We have obtained a log of what they did…

They hand us a .har file (a dump of an HTTP connection). Although internally it is just a huge JSON file, the best way to inspect it quickly is to open it directly in Chrome DevTools, in the Network tab. We import the HAR file to take a look and we see the following.

We can see green dots on the timeline, where several POSTs happen.

Selecting one of those dots:

POST messages towards the Discord server. You can see the user is typing content («0ur53»).

The problem is that walking through those POST messages is not enough. There is a complication: the author of the challenge has edited some chunks (messages with the HTTP PATCH method) and even deleted others (DELETE messages).

We solved it with a small nodejs script.
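The script itself is not included here, but the core of the idea can be sketched from the command line with jq (the file name and the /messages URL filter are assumptions about this particular capture):

# List the bodies of the message-creation (POST) requests in the HAR dump.
# The full solution also replays PATCH entries (edits) and drops DELETEd ones.
$ jq -r '.log.entries[].request
    | select(.method == "POST" and (.url | test("/messages")))
    | .postData.text' discord.har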

8 February 2020

Xamarin forks and whatnots

Busy days in geewallet world! I just released version 0.4.2.198 which brings some interesting fixes, but I wanted to talk about the internal work that had to be done for each of them, in case you're interested.
  • In Linux(GTK), cold storage mode when pairing was broken, because the absence of internet connection was not being detected properly. The bug was in a 3rd-party nuget library we were using: Xam.Plugin.Connectivity. But we couldn't migrate to Xamarin.Essentials for this feature because Xamarin.Essentials lacks support for some platforms that we already supported (not officially, but we know geewallet worked on them even if we haven't released binaries/packages for all of them yet). The solution? We forked Xamarin.Essentials to include support for these platforms (macOS and Linux), fixed the bug in our fork, and published our fork in nuget under the name `DotNetEssentials`. Whenever Xamarin.Essentials starts supporting these platforms, we will stop using our fork.
  • The clipboard functionality in geewallet depended on another 3rd-party nuget library: Xamarin.Plugins.Clipboard. The GTK bits of this were actually contributed by me to their github repository as a Pull Request some time ago, so we just packaged the same code to include it in our new DotNetEssentials fork. One dependency less to care about!
  • Xamarin.Forms had a strange bug that caused some buttons sometimes to not be re-enabled. This bug has been fixed by one of our developers and its fix was included in the new pre-release of Xamarin.Forms 4.5, so we have upgraded geewallet to use this new version instead of v4.3.
Last but not least, I wanted to mention something not strictly related to this new release: we got accepted into GNOME’s GitLab, so now the canonical place to find our repository is here:


Technically speaking, this will allow us to drop GithubActions for our non-Linux CI lanes (we were already happy with our gitlab-ci recipes before we had to use GH; the only limitation was that GitLab.com’s free-tier CI minutes for agents were Linux-only).

But in general we’re just happy to be hosted on the infrastructure of one of the most important opensource projects in the world. Special thanks go to Carlos Soriano and Andrea Veri for helping us with the migration.

PS: Apologies if the previous blogpost to this one shows up in planets again, as it might be a side-effect of updating its links to point to the new git repo!

Introducing geewallet

Version 0.4.2.187 of geewallet has just been published to the snap store! You can install it by looking for its name in the store or by installing it from the command line with `snap install geewallet`. It features a very simplistic and minimalistic UI/UX. Nothing very fancy, especially because it has a single codebase that targets many (potential) platforms, e.g. you can also find it in the Android App Store.

What was my motivation to create geewallet in the first place, around 2 years ago? Well, I was very excited about the “global computing platform” that Ethereum was promising. At the time, I thought it would be the best replacement for Namecoin: a decentralised naming system, but not just focused on that aspect, also bringing Turing-completeness so that you can build whatever you want on top of it, not just a key-value store. So then, I got hold of some ethers to play with the platform. But back then, I didn’t find any wallet that I liked, especially when considering security. Most people were copy+pasting their private keys into a website (!) called MyEtherWallet. Not only was this idea terrifying (since you had to trust not just the security skills of the sysadmin in charge of the domain&server, but also that the developers of the software wouldn’t turn rogue…), it was even worse than that: it was worse than using a normal hot wallet. And what I wanted was actually a cold wallet, a wallet that could run on an offline device, to make sure hacking it would be impossible (not faraday-cage-impossible, but reasonably impossible).

So there I did it, I created my own wallet.

After some weeks, I added bitcoin support to it thanks to the NBitcoin library (good work Nicholas!). After some months, I added a cross-platform UI besides the first archaic command-line frontend. These days it looks like this:



What was my motivation to make geewallet a brain wallet? Well, at the time (and maybe nowadays too, at least before I unveil this project), the only decent brain wallet out there that seemed sufficiently secure (against brute-force attacks) was WarpWallet, from the Keybase company. If you don’t believe in their approach, consider that they have even placed a bounty on a decently small passphrase (so if you ever think this kind of wallet could be hacked, you can be fairly sure any cracker would target that bounty first, before thinking of you). The worst part, again, was that to use it you had to go through a web interface, so you had the double-trust problem again. Now geewallet brings the same WarpWallet seed-generation algorithm (backed by unit tests, of course) but in a desktop/mobile approach, so that you own the hardware where the seed is generated. No need to write down long seeds of random words on pieces of paper anymore: your mind is the limit! (And of course geewallet will warn the user in case the passphrase is too short or simple: it even detects whether all the words belong to the dictionary, to deter low entropy from the human perspective.)

Why did I add support for Litecoin and Ethereum Classic to the wallet? First, let me tell you that bitcoin and ethereum, as technological innovations and network effects, are very difficult to beat. And in fact, I’m not a fan of the proliferation of dubious new coins/tokens portrayed as awesome that claim to be as efficient and scalable as the first two. They would need to beat the network effect not only when it comes to users, but also developers (all the best cryptographers are working on Bitcoin and Ethereum technologies). However, Litecoin and Ethereum Classic are so similar to Bitcoin and Ethereum, respectively, that adding support for them was less than a day’s work. And they are not completely irrelevant: Litecoin may bring zero-knowledge proofs in an upcoming update soon (plus, its fees are lower today, so it’s a cheaper alternative testnet with real value); and Ethereum Classic has some inherent characteristics that may make it more decentralised than Ethereum in the long run (its governance doesn’t follow any cult of personality, plus it will remain a Turing-complete platform on top of Proof of Work instead of switching to Proof of Stake; to understand why this is important, I recommend watching this video).

Another good reason why I started something like this from scratch is that I wanted to use F# in a real open source project. I had been playing with it in a personal (private) project for 2 years before starting this one, so I wanted to show the world that you can build a decent desktop app with simple and not too opinionated/academic functional programming. It reuses all the power of the .NET platform: you get debuggers, you can target mobile devices, and you get immutability by default; all three in one, in this decade, at last. (BTW, everything is written in F#, even the build scripts.)

What’s the roadmap of geewallet? The most important topics I want to cover shortly are three:
  • Make it even more user friendly: blockchain addresses are akin to the numeric IP addresses of the early 80s when DNS still didn’t exist. We plan to use either ENS or IPNS or BNS or OpenCAP so that people can identify recipients much more easily.
  • Implement Layer2 technologies: we’re already past the proof of concept phase. We have branches that can open channels. The promise of these technologies is instantaneous transactions (no waits!) and ridiculous (if not free) fees.
  • Switch the GTK Xamarin.Forms driver to work with the new “GtkSharp” binding under the hood, which doesn’t require glue libraries. (I’ve had quite a few nightmares with native dependencies/libs when building the sandboxed snap package!)
With less priority:
  • Integrate with some Rust projects: MimbleWimble(Grin) lib, the distributed COMIT project for trustless atomic swaps, or other Layer2-related ones such as rust-lightning.
  • Cryptography work: threshold keys or deniable encryption (think "duress" passwords).
  • NFC support (find recipients without QR codes!).
  • Tizen support (watches!).
  • Acceptance testing via UI Selenium tests (look up the Uno Platform).

Areas where I would love contributions from the community:
  • Flatpak support: unfortunately I haven’t had time to look at this sandboxing technology, but it shouldn’t be too hard to do, especially considering that there’s already a Mono-based project that supports it: SparkleShare.
  • Ubuntu packaging: there’s a patch blocked on some Ubuntu bug that makes the wallet (or any .NET app these days, as it affects the .NET package manager: nuget) not build on Ubuntu 19.10. If this patch is not merged soon, the next LTS of Ubuntu will have this bug :( As far as I understand, what needs to be solved is this issue so that the latest hotfixes are bundled. (BTW I have to thank Timotheus Pokorra, the person in charge of packaging Mono in Fedora, for his help on this matter so far.)
  • GNOME community: I’m in search of a home for this project. I don’t like that it lives under my GitLab username, because it’s not easy to find. One of the reasons I’ve used GitLab is that I love the fact that, being open source, many communities are adopting this infrastructure, like Debian and GNOME. That’s why I’ve used it as a bug tracker, for merge requests and to run CI jobs. This means it should be easy to migrate to GNOME’s GitLab, shouldn’t it? There are unmaintained projects (e.g. banshee, which I couldn’t continue maintaining due to changes in life priorities...) already hosted there, so maybe it’s not much to ask if I could host a maintained one? It’s probably the first Gtk-based wallet out there.

And just in case I wasn't clear:
  • Please don’t ask me to add support for your favourite %coin% or <token>.
  • If you want to contribute, don’t ask me what to work on; just think of the personal itch you want to scratch and discuss it with me by filing a GitLab issue. If you’re a C# developer, I wrote a quick F# tutorial for you.
  • Thanks for reading up until here! It’s my pleasure to write about this project.


  • I'm excited about the world of private-key management. I think we can do much better than what we have today: most people think of hardware wallets as unhackable or as cold storage, but most of them are used via USB or Bluetooth! Which means they are not actually cold storage; software wallets with offline support (also called air-gapped) are more secure! I think that eventually these tools will even merge with other ubiquitous tools we’re more familiar with today: password managers!

    You can follow the project on twitter (yes I promise I will start using this platform to publish updates).

    PS: If you're still not convinced about these technologies, or if you didn't understand that PoW video I posted earlier, I recommend going back to basics by watching this other video, produced by a mathematician educator, which explains it really well.

    WORA-WNLF


    I started my career writing web applications. I had struggles with PHP web frameworks, javascript libraries, and rendering differences (CSS and non-CSS glitches) across browsers. After leaving that world, I started focusing more on the backend side of things, fleeing from the frontend camp (mainly just scared of that abomination that was javascript), because in my spare time I still did things with frontends: I hacked on a GTK media player called Banshee and a GTK chat app called Smuxi.

    So there you had me: a backend dev by day, desktop dev by night. But in the GTK world I had similar struggles to the ones I had had as a frontend dev when the browsers wouldn’t behave in the same way. I’m talking about GTK bugs in the non-Linux OSs, i.e. Mac and Windows.

    See, I wanted to bring a desktop app to the masses, but these problems (and others of different kinds) prevented me from doing it. And while all this was happening, another major shift was happening as well: desktop environments were fading while mobile (and not so mobile: tablets!) platforms were rising in usage. This meant yet more platforms that I wished GTK supported. As I’m not a C language expert (nor did I want to be), I kept googling for the terms “gtk” and “android”, or “gtk” and “iOS”, to see if some hacker had put something together that I could use. But that day never came.

    Plus, I started noticing a trend: big companies with important mobile apps stopped using HTML5 within their apps in favour of native apps, mainly chasing the “native look & feel”. This meant, clearly, that even if someone cooked up a hack that made gtk+ run on Android, it would still feel foreign, and nobody would dare to use it.

    So I started to become a fan of abstraction layers that were a common denominator of different native toolkits and kept their native look&feel. For example XWT, the widget toolkit that Mono uses in MonoDevelop to target all 3 toolkits depending on the platform: Cocoa (on macOS), Gtk (on Linux) and WPF (on Windows). Pretty cool hack if you ask me. But using this would contradict my desire to use a toolkit that already supported Android!

    And then there was Xamarin.Forms, an abstraction layer over iOS, Android and Windows Phone, but one that didn’t support desktops. Plus, at the time, Xamarin was proprietary (and I didn’t want to get out of my open source world). It was a big dilemma.

    But then, some years passed, and many events happened around Xamarin.Forms:
    • Xamarin (the company) was bought by Microsoft and, at the same time, Xamarin (the product) was open sourced.
    • Xamarin.Forms is opensource now (TBH not sure if it was proprietary before, or if it was always opensource).
    • Xamarin.Forms started supporting macOS and Windows UWP.
    • Xamarin.Forms 3.0 included support for GTK and WPF.

    So that was the last straw that made me switch all my desktop efforts toward Xamarin.Forms. Not only can I still target Linux+GTK (my favorite platform), I can also make my apps run on mobile platforms, and on the desktop OSs that most people use. So both my niche and the mainstream are covered! But this is not the end: Xamarin.Forms has recently been ported to Tizen too! (A Linux-based OS used by Samsung in SmartTVs and watches.)

    Now let me ask you something. Do you know of any graphical toolkit that allows you to target 6 different platforms with the same codebase? I repeat: Linux(GTK), Windows(UWP/WPF), macOS, iOS, Android, Tizen. The old Java saying is finally here! (but for the frontend side): “write once, run anywhere” (WORA) to which I add “with native look’n’feel” (WORA-WNLF)

    If you want to know who the hero is that made the GTK driver of Xamarin.Forms, follow @jsuarezruiz, who BTW has recently been hired by Microsoft to work on their non-Windows IDE ;-)

    PS: If you like .NET and GTK, my employer is also hiring (remote positions might be available too)! Ping me.

    23 December 2019

    End of the year Update: 2019 edition

    It’s the end of December and it seems that yet another year has gone by, so I figured that I’d write an EOY update to summarize my main work at Igalia as part of our Chromium team, as my humble attempt to make up for the lack of posts in this blog during this year.

    I did quite a few things this year, but for the purpose of this blog post I’ll focus on what I consider the most relevant ones: work on the Servicification and the Blink Onion Soup projects, the migration to the new Mojo APIs and the BrowserInterfaceBroker, as well as a summary of the conferences I attended, both as a regular attendee and a speaker.

    But enough of an introduction, let’s dive now into the gory details…

    Servicification: migration to the Identity service

    As explained in my previous post from January, I’ve started this year working on the Chromium Servicification (s13n) Project. More specifically, I joined my team mates in helping with the migration to the Identity service by updating consumers of several classes from the sign-in component to ensure they now use the new IdentityManager API instead of directly accessing those other lower level APIs.

    This was important because at some point the Identity Service will run in a separate process, and a precondition for that to happen is that all access to sign-in related functionality goes through the IdentityManager, so that other processes can communicate with it directly via the Mojo interfaces exposed by the Identity service.

    I’ve already talked long enough in my previous post, so please take a look in there if you want to know more details on what that work was exactly about.

    The Blink Onion Soup project

    Interestingly enough, a bit after finishing up working on the Identity service, our team dived deep into helping with another Chromium project that shared at least one of the goals of the s13n project: to improve the health of Chromium’s massive codebase. The project is code-named Blink Onion Soup and its main goal is, as described in the original design document from 2015, to “simplify the codebase, empower developers to implement features that run faster, and remove hurdles for developers interfacing with the rest of the Chromium”. There’s also a nice slide deck from 2016’s BlinkOn 6 that explains the idea in a more visual way, if you’re interested.


    “Layers”, by Robert Couse-Baker (CC BY 2.0)

    In a nutshell, the main idea is to simplify the codebase by removing/reducing the several layers of indirection located between Chromium and Blink that were necessary back in the day, before Blink was forked out of WebKit, to support different embedders with their particular needs (e.g. Epiphany, Chromium, Safari…). Those layers made sense back then, but these days Blink’s only embedder is Chromium’s content module, which is the module that Chrome and other Chromium-based browsers embed to leverage Chromium’s implementation of the Web Platform, and also where the multi-process and sandboxing architecture is implemented.

    And in order to implement the multi-process model, the content module is split in two main parts running in separate processes, which communicate among each other over IPC mechanisms: //content/browser, which represents the “browser process” that you embed in your application via the Content API, and //content/renderer, which represents the “renderer process” that internally runs the web engine’s logic, that is, Blink.

    With this in mind, the initial version of the Blink Onion Soup project (aka “Onion Soup 1.0”) was born about 4 years ago, and the folks spearheading this proposal started working on a 3-way plan to implement their vision, which can be summarized as follows:

    1. Migrate usage of Chromium’s legacy IPC to the new IPC mechanism called Mojo.
    2. Move as much functionality as possible from //content/renderer down into Blink itself.
    3. Slim down Blink’s public APIs by removing classes/enums unused outside of Blink.

    Three clear steps, but definitely not easy ones as you can imagine. First of all, if we were to remove levels of indirection between //content/renderer and Blink as well as to slim down Blink’s public APIs as much as possible, a precondition for that would be to allow direct communication between the browser process and Blink itself, right?

    In other words, if you need your browser process to communicate with Blink for some specific purpose (e.g. reacting in a visual way to a Push Notification), it would certainly be sub-optimal to have something like this:

    …and yet that is what would happen if we kept using Chromium’s legacy IPC which, unlike Mojo, doesn’t allow us to communicate with Blink directly from //content/browser, meaning that we’d need to go first through //content/renderer and then navigate through different layers to move between there and Blink itself.

    In contrast, using Mojo would allow us to have Blink implement those remote services internally and then publicly declare the relevant Mojo interfaces so that other processes can interact with them without going through extra layers. Thus, doing that kind of migration would ultimately allow us to end up with something like this:

    …which looks nicer indeed, since now it is possible to communicate directly with Blink, where the remote service would be implemented (either in its core or in a module). Besides, it would no longer be necessary to consume Blink’s public API from //content/renderer, nor the other way around, enabling us to remove some code.

    However, we can’t simply ignore some stuff that lives in //content/renderer implementing part of the original logic so, before we can get to the lovely simplification shown above, we would likely need to move some logic from //content/renderer right into Blink, which is what the second bullet point of the list above is about. Unfortunately, this is not always possible but, whenever it is an option, the job here would be to figure out what of that logic in //content/renderer is really needed and then figure out how to move it into Blink, likely removing some code along the way.

    This particular step is what we commonly call “Onion Soup’ing” //content/renderer/<feature> (not entirely sure “Onion Soup” is a verb in English, though…) and this is for instance how things looked before (left) and after (right) Onion Soup’ing a feature I worked on myself: Chromium’s implementation of the Push API:


    Onion Soup’ing //content/renderer/push_messaging

    Note how the whole design got quite simplified moving from the left to the right side? Well, that’s because some abstract classes declared in Blink’s public API and implemented in //content/renderer (e.g. WebPushProvider, WebPushMessagingClient) are no longer needed now that those implementations got moved into Blink (i.e. PushProvider and PushMessagingClient), meaning that we can now finally remove them.

    Of course, there were also cases where we found some public APIs in Blink that were not used anywhere, as well as cases where they were only being used inside of Blink itself, perhaps because nobody noticed when that happened at some point in the past due to some other refactoring. In those cases the task was easier, as we would just remove them from the public API, if completely unused, or move them into Blink if still needed there, so that they are no longer exposed to a content module that no longer cares about that.

    Now, trying to provide a high-level overview of what our team “Onion Soup’ed” this year, I think I can say with confidence that we migrated (or helped migrate) more than 10 different modules like the one I mentioned above, such as android/, appcache/, media/stream/, media/webrtc, push_messaging/ and webdatabase/, among others. You can see the full list with all the modules migrated during the lifetime of this project in the spreadsheet tracking the Onion Soup efforts.

    In my particular case, I “Onion Soup’ed” the PushMessaging, WebDatabase and SurroundingText features, which was a fairly complete exercise as it involved working on all 3 bullet points: migrating to Mojo, moving logic from //content/renderer to Blink and removing unused classes from Blink’s public API.

    And as for slimming down Blink’s public API, I can tell that we helped get to a point where more than 125 classes/enums were removed from Blink’s public APIs, simplifying and reducing the Chromium codebase along the way, as you can check in this other spreadsheet that tracked that particular piece of work.

    But we’re not done yet! While overall progress for the Onion Soup 1.0 project is around 90% right now, there are still a few more modules that require “Onion Soup’ing”, among which we’ll be tackling media/ (already WIP) and accessibility/ (starting in 2020), so there’s quite some more work to be done in that regard.

    Also, there is a newer design document for the so-called Onion Soup 2.0 project that contains some tasks that we have already been working on for a while, such as “Finish Onion Soup 1.0”, “Slim down Blink public APIs”, “Switch Mojo to new syntax” and “Convert legacy IPC in //content to Mojo”, so we are definitely not done yet. Good news here, though: some of those tasks are already quite advanced, and in the particular case of the migration to the new Mojo syntax it’s nearly done by now, which is precisely what I’m talking about next…

    Migration to the new Mojo APIs and the BrowserInterfaceBroker

    Along with working on “Onion Soup’ing” some features, a big chunk of my time this year went also into this other task from the Onion Soup 2.0 project, where I was lucky enough again not to be alone, but accompanied by several of my team mates from Igalia‘s Chromium team.

    This was a massive task where we worked hard to migrate all of Chromium’s codebase to the new Mojo APIs that were introduced a few months back, with the idea of getting Blink updated first and then having everything else migrated by the end of the year.


    Progress of migrations to the new Mojo syntax: June 1st – Dec 23rd, 2019

    But first things first: you might be wondering what was wrong with the “old” Mojo APIs since, after all, Mojo is the new thing we were migrating to from Chromium’s legacy API, right?

    Well, as it turns out, the previous APIs had a few problems that were causing some confusion due to not providing the most intuitive type names (e.g. what is an InterfacePtrInfo anyway?), as well as being quite error-prone, since the old types were not as strict as the new ones in enforcing certain conditions that should not happen (e.g. trying to bind an already-bound endpoint shouldn’t be allowed). In the Mojo Bindings Conversion Cheatsheet you can find an exhaustive list of cases that needed to be considered, in case you want to know more details about this type of migration.

    Now, as a consequence of this additional complexity, the task wouldn’t be as simple as a “search & replace” operation because, while moving from old to new code, it would often be necessary to fix situations where the old code was working fine just because it was relying on some constraints not being checked. And if you top that up with the fact that there were, literally, thousands of lines in the Chromium codebase using the old types, then you’ll see why this was a massive task to take on.

    Fortunately, after a few months of hard work done by our Chromium team, we can proudly say that we have nearly finished this task, which involved more than 1100 patches landed upstream after combining the patches that migrated the types inside Blink (see bug 978694) with those that tackled the rest of the Chromium repository (see bug 955171).

    And by “nearly finished” I mean an overall progress of 99.21% according to the Migration to new mojo types spreadsheet where we track this effort, where Blink and //content have been fully migrated, and all the other directories, aggregated together, are at 98.64%, not bad!

    In this regard, I’ve also been sending a bi-weekly status report mail to the chromium-mojo and platform-architecture-dev mailing lists for a while (see the latest report here), so make sure to subscribe there if you’re interested, even though those reports might not last much longer!

    Now, back with our feet on the ground, the main roadblock at the moment preventing us from reaching 100% is //components/arc, whose migration needs to be agreed with the folks maintaining a copy of Chromium’s ARC mojo files for Android and ChromeOS. This is currently under discussion (see chromium-mojo ML and bug 1035484) and so I’m confident it will be something we’ll hopefully be able to achieve early next year.

    Finally, and still related to these Mojo migrations, my colleague Shin and I took a “little detour” while working on this migration and focused for a while on the more specific task of migrating uses of Chromium’s InterfaceProvider to the new BrowserInterfaceBroker class. And while this was not a task as massive as the other migration, it was also very important because, besides fixing some problems inherent to the old InterfaceProvider API, it also blocked the migration to the new mojo types, since InterfaceProvider usually relied on the old types!


    Architecture of the BrowserInterfaceBroker

    Good news here as well, though: after having the two of us working on this task for a few weeks, we can proudly say that, today, we have finished all the 132 migrations that were needed and are now in the process of doing some after-the-job cleanup operations that will remove even more code from the repository! \o/

    Attendance to conferences

    This year was particularly busy for me in terms of conferences, as I did travel to a few events both as an attendee and a speaker. So, here’s a summary about that as well:

    As usual, I started the year attending one of my favourite conferences of the year by going to FOSDEM 2019 in Brussels. And even though I didn’t have any talk to present in there, I did enjoy my visit like every year I go there. Being able to meet so many people and being able to attend such an impressive amount of interesting talks over the weekend while having some beers and chocolate is always great!

    Next stop was Toronto, Canada, where I attended BlinkOn 10 on April 9th & 10th. I was honoured to have a chance to present a summary of the contributions that Igalia made to the Chromium Open Source project in the 12 months before the event, which was a rewarding experience but also quite an intense one, because it was a lightning talk and I had to go through all the ~10 slides in a bit under 3 minutes! Slides are here and there is also a video of the talk, in case you want to check how crazy that was.

    Took a bit of a rest from conferences over the summer and then attended, also as usual, the Web Engines Hackfest that we at Igalia have been organising every single year since 2009. Didn’t have a presentation this time, but still it was a blast to attend it once again as an Igalian and celebrate the hackfest’s 10th anniversary sharing knowledge and experiences with the people who attended this year’s edition.

    Finally, I attended two conferences in the Bay Area by mid November: the first one was the Chrome Dev Summit 2019 in San Francisco on Nov 11-12, and the second one was BlinkOn 11 in Sunnyvale on Nov 14-15. It was my first time at the Chrome Dev Summit and I have to say I was fairly impressed by the event, how it was organised and the quality of the talks in there. It was also great for me, as a browsers developer, to see first hand what things web developers are more & less excited about, what’s coming next… and to get to meet people I would never have had a chance to meet at other events.

    As for BlinkOn 11, I presented a 30 min talk about our work on the Onion Soup project, the Mojo migrations and improving Chromium’s code health in general, along with my colleague Antonio Gomes. It was basically an “extended” version of this post where we went not only through the tasks I was personally involved with, but also talked about other tasks that other members of our team worked on during this year, which include many other things! Feel free to check out the slides here, as well as the video of the talk.

    Wrapping Up

    As you might have guessed, 2019 has been a pretty exciting and busy year for me work-wise, but the most interesting bit in my opinion is that what I mentioned here was just the tip of the iceberg… many other things happened on the personal side of things, starting with the fact that this was the year we consolidated our return to Spain after 6 years living abroad, for instance.

    Also, and getting back to work-related stuff here again, this year I was also accepted back into Igalia‘s Assembly after having re-joined this amazing company in September 2018, following a 6-year “gap” living and working in the UK. Besides being something I was very excited and happy about, this also brought some more responsibilities onto my plate, as is natural.

    Last, I can’t finish this post without being explicitly grateful for all the people I got to interact with during this year, both at work and outside, who made my life easier and nicer at so many different levels. To all of you, cheers!

    And to everyone else reading this… happy holidays and happy new year in advance!

    8 December 2019

    «Chris stared out of the window», a theological tale

    My English class: you should write a story starting with “Chris stared out of the window waiting for the phone to ring”. Let’s do it.


    Chris stared out of the window waiting for the phone to ring. He looked into the void while his mind wandered. Time is passing, but it is not. For an eternal being all time is present, but not always is time present. The past, the future are just states of mind for an overlord. But he is still waiting for the phone to ring. Time is coming. The decision was made. It had always been made, before time existed indeed. Chris knows all the details of the plan. He knows because he is God too. He knows because he conceived it. No matter if he had been waiting for the starting signal. No matter if he expects the Holy Spirit to announce it to him. You can call it protocol. He knows because he had decided how to do it. But Chris doubts. He is God. He is the Holy Spirit. But he has been human too. The remembrance of his humanity brings him to a controversial state of mind. Now he doubts. He has always been doubting, since the world is the world and before the existence of time. And after too, because he is an eternal being. He now relives the feelings of being a human. He relives all the feelings of being all the humans. He revisits joy and pain. Joy is good, sure. But joy is nothing special for an overlord god. But pain… pain matters for a human — and Chris has been a human. Chris knows. Chris feels. Chris understands how sad human life can be. Chris knows because he is the Father creator. He created humans. He knows how mediocrity drives the character of all the creatures privileged with consciousness. A poisoned gift. He knows how evil is always just an expression of insecurities, the lack of certainty about what will happen tomorrow. What will happen just the next second. “Will I be alive? Will the pain still exist?”. He knows because he feels. And he doubts because he feels. He feels it can’t be fair to judge, to condemn, to punish a living being because it was created that way. This would not be the full-of-love god the Evangelists announced. How could he punish for a sin he was the cause of? But, if not, can it be fair to all the others who behave according to the Word? All of those who, maybe with love, maybe with pain, maybe just out of selfishness, fulfilled God’s proposal of virtue and goodness. How can he not distinguish their works, not reward their efforts? How can it be fair? How can he be good? His is the power and the glory. He is in doubt. The phone rings.

    20 November 2019

    Some GNOME / LAS / Wikimedia love

    For some time now I’ve been dedicating some more time to Wikimedia-related activities. I love to share this time with the other opensource communities I’m involved in. This post is just to write down a list of items/resources I’ve created related to events in this domain.

    Wikidata

    If you don’t know about Wikidata, you should probably take a look at it, because it is on its way to becoming the most important linked-data corpus in the world. In the future we will use WD as the cornerstone of many applications. Remember you read this here first.

    About GUADEC:

    About GNOME.Asia:

    About LAS 2019:

    And about the previous LAS format:

    Wikimedia Commons

    Wikimedia Commons is my current favorite place to publish pictures with open licensing these days. To me it is the ideal place to publish reusable resources with explicit Creative Commons open licensing. And you can contribute your own media without intermediaries.

    About GUADEC:

    About GNOME.Asia:

    About LAS:

    Epilogue

    As you can check, the list is not complete, nor are all items fully described. I invite you to complete whatever information you want. For Wikidata there are many places where you can ask for help. And for Wikimedia Commons you can help by uploading your own pictures. If you have doubts just use the current examples as references or ask me directly.

    Linux Applications Summit 2019 activity

    Here is a summary of my activities at the past Linux App Summit this month in Barcelona.

    My first goal was spreading the word about the Indico conference management system. This was partly done with a lightning talk. Sadly I couldn’t show my slides, but here they are for your convenience. Anyhow, they are not particularly relevant.

    Also, @KristiProgri asked me to help taking pictures of the event. I’m probably the worst photographer in the world and I don’t have a special interest in photography as a hobby, but for some months I’ve been trying to take documentary pictures supposedly relevant for the Wikimedia projects. The good thing is it seems I’m getting better, especially since I changed to a new smartphone which is making magic with my pictures. I just use a mere Moto G7+ smartphone, but it’s making me really happy with the results, exceeding all my expectations. I’ll just say I found the standard camera application doesn’t work well for me when photographing moving targets, but I’m doing better indeed with the wonderful Open Camera opensource Android application.

    I uploaded my pictures to Category:Linux App Summit 2019. Please consider adding yours to the same category.

    Related to this, I added items to Wikidata too.

    I also helped a bit sharing pics on Twitter under #LinuxAppSummit:

    And finally, I helped the local team with some minor tasks, like moving items and such.

    I want to congratulate the whole organization team, and especially the local team, for the results and the love they have put into the event. The results have been excellent, and this is another strong step for the interwoven relations between opensource development communities sharing very close goals.

    My participation at the conference has been sponsored by the GNOME Foundation. Thanks very much for their support.

    5 November 2019

    Congress/Conference organization tasks list draft

    Well, when checking my blog looking for references to resources related to conference organization, I found I didn’t have any link to this thing I compiled two years ago (!!??). So this post fixes that.

    After organizing a couple of national and international conferences I compiled a set of tasks useful as a skeleton for your next conference. The list is neither absolutely exhaustive nor strictly formal, but it’s complete enough to be, I think, accurate and useful. In its current form this task list is published at tree.taiga.io as Congress/Conference organization draft: «a simplified skeleton of a kanban project for the organization of conferences. It is specialized in technical and opensource activities and based on real experience».

    I think the resource is still valid and useful. So feel free to use it and provide feedback.

    task list screenshot

    Now, thinking aloud, and considering my crush on EPF Composer, I seriously think I should model the tasks with it as a SPEM method and publish both the sources and a website. And, hopefully, create tools for generating project drafts in well-known tools (GitLab, Taiga itself, etc.). Reach out to me if you are interested too :-)

    Enjoy!

    24 October 2019

    VCR to WebM with GStreamer and hardware encoding

    Many years ago my family bought a Panasonic VHS video camera and we recorded quite a lot of things: holidays, some local shows, etc. I even got paid 5000 pesetas (30€, more than 20 years ago) a couple of times to record weddings in an amateur way. Since my father passed away less than a year ago I have wanted to convert those VHS tapes into something that can survive better, technologically speaking.

    For the job I bought a USB 2.0 dongle and connected it to a VHS VCR through a SCART to RCA cable.

    The dongle creates a V4L2 device for video and is detected by Pulseaudio for audio. As I want to see what I am converting live, I need to tee both audio and video to the corresponding sinks, while the other branch goes to the encoders, muxer and filesink. The command line for that would be:

    gst-launch-1.0 matroskamux name=mux ! filesink location=/tmp/test.webm \
    v4l2src device=/dev/video2 norm=255 io-mode=mmap ! queue ! vaapipostproc ! tee name=video_t ! \
    queue ! vaapivp9enc rate-control=4 bitrate=1536 ! mux.video_0 \
    video_t. ! queue ! xvimagesink \
    pulsesrc device=alsa_input.usb-MACROSIL_AV_TO_USB2.0-02.analog-stereo ! 'audio/x-raw,rate=48000,channels=2' ! tee name=audio_t ! \
    queue ! pulsesink \
    audio_t. ! queue ! vorbisenc ! mux.audio_0

    As you can see, I convert to WebM with VP9 and Vorbis. Some interesting bits are passing norm=255 to the v4l2src element so it captures PAL, and rate-control=4 (VBR) to the vaapivp9enc element; otherwise it uses cqp by default and the file size would end up being huge.

    You can see the pipeline, which is beautiful, here:

    As you can see, we’re using vaapivp9enc here, which is hardware enabled. Having this pipeline running on my computer consumed more or less 20% of CPU with the CPU absolutely relaxed, leaving me the necessary computing power for my daily work. This would not be possible without GStreamer and the GStreamer VAAPI plugins, something that is not the case with other solutions whose instructions you can find online.

    If for some reason you can’t find vaapivp9enc in Debian, you should know there are a couple of packages for the Intel drivers, and the one you should install is intel-media-va-driver. Thanks go to my colleague at Igalia Víctor Jáquez, who maintains gstreamer-vaapi and helped me solve this problem.
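    On Debian that would be something like:

    $ sudo apt install intel-media-va-driver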

    My workflow for this was converting all tapes into WebM and then cutting them into the different relevant pieces with PiTiVi, running GStreamer Editing Services, both co-maintained by my colleague at Igalia, Thibault Saunier.

    22 October 2019

    Testing Indico opensource event management software

    Indico event management tool

    After organizing a bunch of conferences in the past years I found some communities had problems choosing conference management software. Every alternative had limitations in one way or another. Along the way I collected a list of opensource alternatives, and recently I’ve become very interested in Indico. This project is created and maintained by CERN (yes, those guys who invented the WWW too).

    The most interesting reasons for me are:

    Jornadas WMES 2019

    With the help of Franc Rodríguez we set up an Indico testing instance at https://indico.olea.org. This system is ready to be broken so feel free to experiment.

    So this post is an invitation to any opensource community wanting to test the feasibility of Indico for their future events. Please consider giving it a try.

    Here are some items I consider relevant for you:

    And some potential enhancements (not fully checked whether they are currently available or not):

    • videoconf alternatives: https://meet.jit.si
    • social networks integration
      • Twitter
      • Mastodon
      • Matrix
    • exports formats
      • pentabarf
      • xcal, etc
    • full GDPR compliance (it seems you just need to add the relevant information to your instance)
    • gravatar support
    • integration with the SSO used by the respective community (to be honest I didn’t check the Flask-Multipass features)
    • maybe an easier invitation procedure: sending invitation links by email for full setup;
    • map integration (OSM and others).

    For your tests you’ll need to register at the site and contact me (look at the bottom of this page) so I can add you as a manager for your community.

    I think it would be awesome for many communities to share a common software product. Wouldn’t it?

    PS: Great news: next March CERN will host an Indico meeting!
    PPS: Here you can check a fully configured event organized by the Libre Space Foundation people: Open Source CubeSat Workshop 2019.
    PPPS: And now that I’ve got your attention, check our Congress/Conference organization tasks list. It’s free!

    17 October 2019

    Gnome-shell Hackfest 2019 – Day 3

    As promised, some late notes on the 3rd and last day of the gnome-shell hackfest, so yesterday!

    Some highlights from my partial view:

    • We had a mind-blowing, in-depth discussion about the per-crtc frame clocks idea that’s been floating around for a while. What started as “light” before-bedtime conversation the previous night continued the day after, straining our neurons in front of a whiteboard. We came out wiser nonetheless, and have a much more concrete idea of how it should work.
    • Georges updated his merge request to replace Cogl structs with graphene ones. This now passes CI and was merged \o/
    • Much patch review happened in place, and some other pretty notable refactors and cleanups were merged.
    • The evening was more rushed than usual, with some people leaving already. The general feeling seemed good!
    • In my personal opinion the outcome was pretty good too. There’s been progress at multiple levels and new ideas sparked; you should look forward to posts from others :). It was also great to put a face to some IRC nicks, and to meet again all the familiar ones.

    Kudos to the RevSpace members, and especially Hans; without them this hackfest couldn’t have happened.

    16 de October de 2019

    Gnome-shell Hackfest 2019 – Day 2

    Well, we are starting the 3rd and last day of this hackfest… I’ll write about yesterday, which probably means tomorrow I’ll blog about today :).

    Some highlights of what I was able to participate/witness:

    • Roman Gilg of KDE fame came to the hackfest; it was a nice opportunity to discuss mixed DPI densities for X11/Xwayland clients. We first thought about having one server per pixel density, but later on we realized we might not be that far from actually isolating all X11 clients from each other, so why stop there?
    • The conversation drifted into other topics relevant to desktop interoperation. We did discuss about window activation and focus stealing prevention, this is a topic “fixed” in Gnome but in a private protocol. I had already a protocol draft around which was sent today to wayland-devel ML.
    • A plan was devised for what is left of Xwayland-on-demand, and an implementation is in progress.
    • The designers have been doing some exploration and research on how we interact with windows, the overview and the applications menu, and thinking about alternatives. At the end of the day they’ve demoed to us the direction they think we should take.

      I am very much not a designer and I don’t want to spoil their fine work here, so stay tuned for updates from them :).

    • For the social event, we had a very nice BBQ with some hackerspace members, again kindly organized by Revspace.

    14 de October de 2019

    Gnome-shell Hackfest 2019 – Day 1

    So today kickstarted the gnome-shell hackfest in Leidschendam, the Netherlands.

    There's a decent number of attendees from multiple parties (Red Hat, Canonical, Endless, Purism, …). We all brought various items and future plans for discussion, and have a number of merge requests in various states to go through. Some exciting keywords are Graphene, YUV, mixed DPI, Xwayland-on-demand, …

    But that is not all! Our finest designers also got together here, and I overheard they are discussing the usability of the lock screen, among other topics.

    This event wouldn't have been possible without the Revspace hackerspace people and especially our host Hans de Goede. They kindly provided the venue and necessary material; I am deeply thankful for that.

    As there are various discussions going on simultaneously it’s kind of hard to keep track of everything, but I’ll do my best to report back over this blog. Stay tuned!

    13 de October de 2019

    Jornadas Wikimedia España WMES 2019: wikithon on the historic built heritage of Andalusia

    Jornadas WMES 2019

    In my last post I already mentioned that I will lead a workshop on editing with Wikidata at the Jornadas Wikimedia España 2019. Here I present the links and references we will use in the workshop. We focus on the case of the Andalusian historic built heritage because I have been working with it for a while and I am familiar with it, but it can be extrapolated to any other similar domain.

    I want to encourage anyone interested to participate, regardless of your experience with Wikidata. I think it will be worth it. What I do beg of you, please, is that you all bring a laptop.

    Official references

    Main Wikimedia services of interest

    Related material in the Wikimedia projects:

    Related SPARQL queries to Wikidata:
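
    As an illustration of the kind of query we will play with (this sketch is mine, not one of the workshop links; the identifiers are assumptions: P1435 = heritage designation, P131 = located in the administrative territorial entity, Q5783 = Andalusia), Wikidata's SPARQL endpoint can be queried from Python:

    import requests

    # Sketch: list a few Andalusian items carrying a heritage designation.
    QUERY = """
    SELECT ?item ?itemLabel WHERE {
      ?item wdt:P1435 ?designation ;
            wdt:P131+ wd:Q5783 .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "es,en". }
    }
    LIMIT 10
    """

    resp = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "wikithon-example/0.1"},
        timeout=60,
    )
    for row in resp.json()["results"]["bindings"]:
        print(row["item"]["value"], "-", row["itemLabel"]["value"])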

    Other external services of interest:

    Monument examples

    We will use a few examples as reference material. The Alhambra one is especially relevant because it is, by a wide margin, the entry with the most data in the entire Andalusian heritage guide.

    Alhambra de Granada

    Puente del Hacho

    Estación de Renfe de Almería

    Acknowledgements

    Jornadas WMES 2019

    My attendance at the conference has been possible thanks to the financial support of the Wikimedia España association. My gratitude to them.

    10 de October de 2019

    Next conferences

    Just to say I'm going to a couple of conferences here in Spain:

    At WMES 2019 I will lead a Wikidata workshop about adding historical heritage data, basically repeating the one at esLibre.

    At LAS 2019 I plan to attend the Flatpak workshops and to call a BoF for people involved in organizing opensource conferences, to share experiences and reuse tools.

    Many thanks to the Wikimedia España association and the GNOME Foundation for their travel sponsorship. Without their help I could not attend both.

    See you in Pamplona and Barcelona.

    01 de October de 2019

    A new time and life next steps

    the opensource symbol

    Since the beginning of my career in 1998 I have been involved with Linux and opensource in one way or another. From sysadmin I grew into distro making, hardware certification and finally consulting, plus some other added skills. In parallel I developed a personal career in libre software communities and had the privilege of giving lots of talks, particularly in Spain and Ibero-America. Those were great times. All this stopped in 2011 with the combination of the big economic crisis in Spain and a difficult personal situation, which led me to move back from Madrid to my home city, Almería, to recover my health. Now, after several years here, I am ready to take a new step and reboot my career.

    Not all this time has been wasted. I dedicated lots of hours to a new project which in several senses has been the inverse of the typical practices in opensource communities. Indeed, I have tried to apply most of those practices, but instead of on the world-wide Internet, with a 100% hyper-local focus. This meant working in the context of a medium-small city (fewer than 200k inhabitants), with intensive in-person meetings backed by Internet communications. Not all the results have been as successful as I intended, probably because I kept very big expectations; as Antonio Gramsci said, «I'm a pessimist because of intelligence, but an optimist because of will» :-) The effort was developed in what we named HackLab Almería, and some time ago I wrote a recap about my experience. To me it was both an experiment and a recovery therapy.

    That time served to recover ambitions, a la Gramsci, and to bring relevant, important and itinerant events to our nice city, always related to opensource. Retaking the experience of the good old HispaLinux conferences, we managed to host a series of extraordinarily great technological conferences: from PyConES 2016 to Akademy 2017, GUADEC 2018 and LibreOffice Conference 2019. For some time I thought Almería was the first city to host these three… until I realized Brno did it before! The icing on the cake was the first conference on secure programming in Spain: SuperSEC. I consider all of this a great personal success.

    I forgot to mention I enrolled in a university course too, more as an excuse to work in an area for which I had never found time: information and software methodology modeling. This materializes in my degree project, in an advanced state of development but not yet finished, around the ISO/IEC 29110 standard and the EPF Composer. I am giving it a final push in the coming months.

    Now I am closing this stage to start a new one, with different priorities and goals. The first one is to reboot my professional career, so I am looking for a new job and have started a B2 English certification course. I am resuming my participation in opensource communities (I will attend LAS 2019 next November) and hope to contribute small but non-trivial collaborations to several of them. After all, I think most of what I have been doing all these years has been shepherding the digital commons.

    See you in your recruitment process! ;-)

    PS: there is a Spanish version of this post.

    30 de September de 2019

    A new stage: a change of era and my professional future

    Changing tack: more of the same, but in a different way

    the opensource symbol

    Since I started my professional career in 1998 I have almost always been involved with the Linux world and free software. While my career has not been too brilliant, I do not regret so much, as the saying goes, what I have done as what I have not done and the opportunities I could have seized. But everything changed in 2011, when the Spanish economic crisis combined with work problems and, above all, personal ones, which forced me to move back from Madrid to my home city, Almería, to recover. It has taken time, but it seems we have made it. It was in that period that what we ended up calling HackLab Almería was born.

    Personally, the activity in the HLA was an experiment in applying the baggage of knowledge and practices acquired in open opensource communities over more than 10 years but, in this case, with a totally inverse approach: from mainly online communities with even worldwide reach to a radically hyper-local orientation with a necessarily intense in-person scope. At that time I had plenty of free time and I threw myself into creating content, identifying and establishing personal contacts, and energizing a new community that could reach self-sustaining momentum and critical mass. It was also in that period, during a one-off visit to Madrid (by then my travelling had dropped to almost zero), that a motivating conversation with ALMO got me recovering lost enthusiasm and the drive to create, which finally crystallized into months of intense activity that, free of professional or economic ties, also served to recover skills and cultivate new ones, digging into a project aligned with my experience and interesting enough to hold my attention permanently. Useful time as therapy to recover self-esteem, peace of mind and intellectual performance.

    Along the way I took the opportunity to reinforce my practice of the hacker ethic: for years I have been a great dilettante with, perhaps, many things to say but very little impact. And that is not a nice feeling for a narcissist. So I decided to make an effort to talk less and do more. How far I got could be discussed some other time, although back then I did write a retrospective. I also devoted attention to digging deeper into open knowledge and the digital commons: MusicBrainz, Wiki Commons, Wikidata, OpenStreetMap, etc.

    Around that time, almost by chance, the opportunity arose to bring important technological gatherings (one way or another always related to opensource) to this city of mine, peripheral within the periphery and, perhaps, the only island of Spain located on the Iberian Peninsula itself. I already had previous experience promoting and collaborating in those HispaLinux conferences, but the work on PyConES 2016 (thanks Juanlu for the trust) was a qualitative leap that later materialized in hosting Akademy 2017, GUADEC 2018 and LibreOffice Conference 2019 in Almería. For some time I thought ours would be the first city to achieve this triple… until I discovered that Brno had beaten us to it :-) Along the way we also invented SuperSEC, the first national conference on secure programming in Spain.

    Now I consider this stage finished, partly quite frustrated. I am not satisfied with all the results, particularly with the local impact. While preparing this article I had thought about going into some descriptive details but… what for? Those who could have taken an interest did not do so at the time, it would still hurt me to go into a retrospective and… in the end, what for? To become another lapse vanished into entropy. My conscience is at peace, though, because I know that, for better or worse, I gave it my all.

    So: changing tack. The 1st of October is not a bad date for it. I am focusing again on developing my professional profile and, attention dear audience, I am looking for a job. Obviously, the closer to the free software world and its surroundings, the better. There is still a huge amount left to do to build the free digital infrastructure needed for an open digital society, and I want to keep being part of it. After all, I believe that everything I have done since the 90s has been shepherding the digital commons.

    See you in your recruitment process ;-)

    PS: there is an English version of this article.

    23 de September de 2019

    LibreOffice Conference 2019 by numbers

    LibreOffice Conference 2019 badge

    LibreOffice Conference 2019 has ended and… it seems people really enjoyed it!

    Here I provide some metrics about the conference. Hopefully they will be useful for the next editions.

    • Attendees:
      • 114 registered at website before Aug 31 deadline;
      • 122 total registered at the end of the conference;
      • 102 registered in person at the conference.
    • Registered countries of origin: Albania, Austria, Belgium, Bolivia, Brazil, Canada, Czech Republic, Finland, France, Germany, Hungary, India, Ireland, Italy, Japan, Korea - Republic of, Luxembourg, Poland, Portugal, Romania, Russian Federation, Slovenia, South Africa, Spain, Sweden, Switzerland, Taiwan, Turkey and United Kingdom;
    • 4 days: 1 for board and community meetings and 3 for conference talks;
    • 3 tracks;
    • 68 talks, 6 GSoC presentations and 13 lightning talks;
    • 1 new individual certification;
    • 4 social events:
      • welcome party, 70 participants;
      • beach dinner party, 80 participants;
      • theatrical visit to the Alcazaba castle, 50 participants;
      • after conference city visit, 14 participants;
    • 1 hackfest, approximately 50 participants;
    • 1 conference shuttle service bus (capacity for more than 120 persons);
    • Telegram communications:
    • Conference pictures, at least:
    • Weather: two completely unexpected rainy days in Almería o_0
    • Economically, the conference ended with some surplus, which is nice. Thanks a lot to our sponsors for making this possible.

    Next is a list of data tables with more information.

    Meals at university cafeteria:

                     Sept. 10   Sept. 11   Sept. 12   Sept. 13   Total
    meals: expected        70        106        106        107     389
    meals: served          54         92         97         86     329


    T-shirts, ordered from our friends at FreeWear:

    type             size (EU)   number
    unisex           S                9
    unisex           M               24
    unisex           L               36
    unisex           XL              15
    unisex           XXL             15
    unisex           XXXL             7
    unisex - tight   S                1
    unisex - tight   M                4
    unisex - tight   L                2
    total                          113


    The LibOCon overnight stays at Civitas were:

    day   number
    2019/09/05 1
    2019/09/06 1
    2019/09/07 5
    2019/09/08 32
    2019/09/09 57
    2019/09/10 75
    2019/09/11 77
    2019/09/12 77
    2019/09/13 64
    2019/09/14 13
    2019/09/15 3
    2019/09/16 3
    total overnights: 408


    Twitter campaign activity at @LibOCon:

    Month    tweets   impressions   profile visits   mentions   new followers
    Apr           2          2321              228          9              10
    May           6          8945              301          6              19
    Jun           3          3063               97          3               5
    Jul           3          5355              188          3              13
    Aug          10          8388              208         10               2
    Sept         75         51200             1246        158   (not available)
    totals:      99         79272             2268        189              49



    PS: I'm amazed I've blogged almost nothing about the conference until now!!
    PPS: Added the overnight numbers at the conference hotel.

    30 de July de 2019

    HackIt 2019, level 3³

    I think this challenge took us more than 50% of this year's HackIt time :-O, but it's the kind of challenge we love: you know what has to be done, but the path is tortuous, painful and complex. Let's get to it 🙂

    The challenge title always carries some hint as a play on words. That cube as a superscript…

    We analyzed the dump and saw it was a pcap. We opened it with Wireshark and browsed around for a while.

    No self-respecting HackIt could go without a Wireshark challenge 🙂

    That tcp/25565 port looks familiar…

    You could also deduce that it was a capture of the Minecraft protocol by looking at the strings. Something like «generic.movementSpeed?» shows up; googling it leads to Minecraft, no doubt.

    Yep, Minecraft. On the server 51.15.21.7. Here we got trolled by @imobilis once more… or maybe it was an easter egg in the challenge 🙂 The thing is, that server exists (!) and has a world where you spawn on top of a tower you cannot leave. It even has messages on some signs (of course we tried them all, without success), like the one in the image (Mundo Survival Kots).

    We spent ages «playing» on that tower. The messages are red herrings.

    The dump has messages sent from the client (10.11.12.52) to the server (51.15.21.7) and vice versa. The payload of the messages is (or seemed to be!) in the clear and can be extracted with tshark.

    $ tshark -r dump -T fields -e data
    
    1b0010408d2e07aeae7d91401400000000000040855ae9b632828401
    12004a0000000059c86aa10000000000001ac9
    0a0021000000028daf8dbd
    0a000e000000028daf8dbd

    At this point we were feeling quite confident, because we saw there were Minecraft protocol dissectors for Wireshark, like this one or this one. All very rosy… until we looked at the date of the last commit: 2010. Great… useless to us. So we rolled up our sleeves, went for coffee, and started studying the Minecraft protocol specification, which reads as if written by someone taking notes at a talk rather than as a well-written specification. There are exactly 0 examples of the most cumbersome parts (VarInt, packets with compression, …). In the end, our teammate Joserra, an Excel wizard, decided our scripts were a **** pile of crap and that he was going to do it in Excel ¯\_(ツ)_/¯

    If we take the first payload, 001b is the packet size (27 bytes), 0x10 the packetID and 408d2e07aeae7d91401400000000000040855ae9b632828401 the packet payload. 0x10 is the ID of a «Player Position» packet (Bound to server indicates it is the client sending to the server). The payload is divided into 4 fields: x (double), feet y (double), z (double), «on ground» (boolean). All position packets (0x10, server bound) are odd, so they end in 1 (true, on ground). We want to know x, y and z.

    x= 408d 2e07 aeae 7d91
    y = 4014 0000 0000 0000
    z = 4085 5ae9 b632 8284

    To convert from hex to double, we invoke a macro, hex2dbl.

    It's not the first time we solve a challenge with Excel 🙂

    And we obtain the x, y, z positions.
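
    For reference, the same conversion is a one-liner in Python with the struct module (a sketch, not the macro the team used; the hex values are the ones from the payload above):

    import struct

    # Decode the three big-endian IEEE 754 doubles of a "Player Position"
    # payload. "feet y" decodes to exactly 5.0.
    payload = bytes.fromhex(
        "408d2e07aeae7d91"   # x
        "4014000000000000"   # feet y
        "40855ae9b6328284"   # z
    )
    x, y, z = struct.unpack(">ddd", payload)
    print(x, y, z)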

    Finally, we generate a scatter plot and obtain the key 🙂

    @imobilis must have spent hours moving the Minecraft player around the map to trace the text. Looking closely, it always starts from one point, goes down, and comes back up to that point to trace the next letter. Analyzing the payload, the height of that upper zone is different from the height where the letters are drawn. Probably, in the game there was some kind of step marking the «safe» zone (where moving to the right to draw the next letter was allowed). What a piece of work!

    Watch out for uppercase, lowercase, 0 vs. O, 1 vs. I, etc… That was the final trolling of a good challenge 🙂

    BLoCkD3f1nEdPrOt0coL

    UPDATE: @navarparty (the first team to beat this challenge) has published their solution (in Go!). Thanks @tatai!
    I also recommend reading w0pr's write-up and their elegant solution in Python + pygame.

    29 de July de 2019

    HackIt! 2019, Level 2

    This level seems to have choked quite a few teams. Although we spent a good few hours going around in circles, once it's solved you realize that what made it complex were really several red herrings. If you followed them, you were dead. The level starts with 3 files: yellow, red, green. Here is the first bait: what are these colors for?… Anyway, extracting strings, the most striking file is red.

    Juanan-2:2 juanan$ strings -n 12 red|more
    Ktablered1000red1000^LCREATE TABLE red1000(redz blob)N
    ytablered100red100
    CREATE TABLE red100(reda varchar(10),redb varchar(10))H
    utablered9red9
    CREATE TABLE red9(reda varchar(10),redb varchar(10))H
    utablered8red8
    CREATE TABLE red8(reda varchar(10),redb varchar(10))H
    utablered7red7
    CREATE TABLE red7(reda varchar(10),redb varchar(10))H
    utablered6red6
    CREATE TABLE red6(reda varchar(10),redb varchar(10))H
    utablered5red5
    CREATE TABLE red5(reda varchar(10),redb varchar(10))H
    utablered4red4
    ...
    CREATE TABLE red1(reda varchar(10),redb varchar(10))
    0000000 5473 6572 6d34 3352 000a

    Well… a database, probably SQLite. And the redz column of the red1000 table is of type blob. We went around and around on this. We even managed to import the structure of the tables.

    In the red1 table, the reda column holds something:

    But that already showed up in the strings; there was no need to get tangled up with SQLite… Hmm, let's see what it means:

    misterio = [0x54,0x73,0x65,0x72,0x6d,0x34,0x33,0x52,0x00,0x0a]
    print("".join(chr(c) for c in misterio))
    Tserm43R

    Tserm43R? WTF? @ochoto commented in the group that maybe each pair of values had to be swapped (big endian?), because the last bytes are a newline + end of string reversed (0x00, 0x0a). Here we go (dropping the newline):

    misterio = [0x54,0x73,0x65,0x72,0x6d,0x34,0x33,0x52]
    "".join([chr(a)+chr(b) for a,b in [i for i in zip(misterio[1::2], misterio[::2])]])
    
    'sTre4mR3'

    It makes sense, it looks like a chunk of string in h4x0r speak. Let's leave it there and go for green. This one was easier:

    $ binwalk -e green
    
    DECIMAL       HEXADECIMAL     DESCRIPTION
    --------------------------------------------------------------------------------
    27337196      0x1A121EC       HPACK archive data
    33554432      0x2000000       gzip compressed data, has original file name: "trololo", from Unix, last modified: 2019-07-15 23:29:50
    
    $ ls -al _green.extracted/
    total 8
    drwxr-xr-x   3 juanan  wheel    96 Jul 25 21:28 .
    drwxrwxrwt@ 70 root    wheel  2240 Jul 29 22:15 ..
    -rw-r--r--   1 juanan  wheel     8 Jul 25 21:28 trololo
    
    $ cat _green.extracted/trololo
    ce1VEd!

    Well, if we concatenate red with green (same order as in the statement), we get ‘sTre4mR3ce1VEd!’. It looks very promising. We only have one file left, yellow. It's a binary file, with no magic number or associated strings. After much going around, something obvious occurred to us (right, @navarparty? XDDD): open it with Audacity:

    Bingo, you can hear someone spelling out, in English and at great speed, the part of the password we were missing. Adjusting the speed, and taking into account that the darker zones of the signal indicate uppercase, we get R0tT3nB1t.

    So… R0tT3nB1tsTre4mR3ce1VEd!

    Note: this post does not reflect the difficulty of the challenge. It was not «that easy» 🙂 We spent a looooooong time analyzing the 3 binaries until we found the right sequence of steps and tools.

    HackIt! 2019, level 1

    One more year (and that makes 20) we attended the Euskal Encounter ready to give it our all, especially at the HackIt! and the CTF. In this EE27 HackIt! we sweated buckets to clear 3 challenges out of 6, achieving second place, which gives an idea of the difficulty level. That said, all of them were well thought out and carefully crafted, so first of all, as always, thanks to @imobilis and @marcan42 for their work. Honestly, just imagining what it must have cost to implement some of them (level 3, the Minecraft one, must have been painful… or level 6, with the LED strip and the LFSR in the image) makes me want to buy them a drink so they come back next year with new ideas 🙂 Anyway, let's get down to business: level 1, Extreme Simplicity.

    We open the source code and see the following chunk of JS:

    function q(e){var a=">,------------------------------------------------------------------------------------[<+>[-]],----------------------------------------------------[<+>[-]],------------------------------------------------------------------------------------------------------------------[<+>[-]],----------------------------------------------------------------------------------------------------------------[<+>[-]],-------------------------------------------------[<+>[-]],--------------------------------------------------------------------------------------------------------------------[<+>[-]],-----------------------------------------------------------------------------------[<+>[-]],-------------------------------------------------------------------[<+>[-]],------------------------------------------------------------------------------------------------------------------[<+>[-]],-------------------------------------------------[<+>[-]],----------------------------------------------------------------------------------------------------------------[<+>[-]],------------------------------------------------------------------------------------[<+>[-]],[<+>[-]][-]+<[>>>++[>+++[>+++++++++++++++++++<-]<-]>>.-------------.-.<<<<[-]<[-]]>[>>>++[>+++[>+++++++++++++++++<-]<-]>>+.[>+>+<<-]>+++++++++++.>--..<----.<<<[-]]";let r=0,f=0;var i=a.length,c=new Uint8Array(3e4),s="",b=10240,k=0;for(r=0;r<i&&!(b<0);r++)switch(b--,a[r]){case">":f++;break;case"<":f>0&&f--;break;case"+":c[f]=c[f]+1&255;break;case"-":c[f]=c[f]-1&255;break;case".":s+=String.fromCharCode(c[f]);break;case",":k>=e.length?c[f]=0:c[f]=e.charCodeAt(k),k++;break;case"[":if(!c[f])for(var t=0;a[++r];)if("["===a[r])t++;else if("]"===a[r]){if(!(t>0))break;t--}break;case"]":if(c[f])for(t=0;a[--r];)if("]"===a[r])t++;else if("["===a[r]){if(!(t>0))break;t--}}return s}
    $(function(){$('#password').keyup(function(e){$('#password').css({'background-color':q($('#password').val())});});});

    Here we started wasting time (business as usual 🙂) debugging with the DevTools. We created a new snippet, pasted the code, clicked {} for a pretty-print, inserted one last line: console.log( q('password') ), set a breakpoint on line 2 of q() and ran the function step by step… Fine, it could be solved that way, but it would take hours… Someone in the team, very wisely, not only saw that this code was BrainFuck, but also thought that translating it into C was a good first step. We cloned this translator, ran it on the BrainFuck and obtained this simple program.

    If we look at it, we see the codes of several ASCII characters (84, 52, 114…), so, before anything else, we tried that sequence and… Bingo!

    import re

    file = open("bf.c", "r")
    for line in file:
        match = re.search(r'tape.*-= ([0-9]*)', line)
        if match:
            if int(match.group(1)) > 13:
                print(chr(int(match.group(1))), end='')
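
    The same result can be obtained straight from the BrainFuck source, skipping the C translation step: each «,---…-[» block subtracts N from the input byte before branching, so the expected character is simply chr(N). A sketch (assuming the BF program has been saved to program.bf):

    import re

    # Sketch: recover the expected password from the dash runs that follow
    # each ',' (input) instruction in the BrainFuck program.
    bf = open("program.bf").read()
    for dashes in re.findall(r",(-+)\[", bf):
        print(chr(len(dashes)), end="")
    print()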

    21 de July de 2019

    What am I doing with Tracker?

    “Colored net” by Chris Vees (priorité maison) is licensed under CC BY-NC-ND 2.0

    Some years ago I was asked to come up with some support for sandboxed apps wrt indexed data. This drummed up into Tracker 2.0 and domain ontologies, allowing those sandboxed apps to keep their own private data and a collection of Tracker services to populate it.

    Fast forward to today and… this is still largely unused: Tracker-using flatpak applications still whitelist org.freedesktop.Tracker, and are thus allowed to read and change content there. Although I've been told it's been mostly lack of time… I cannot blame them; domain ontologies offer the perfect isolation at the cost of the perfect duplication. It may do the job, but it is far from optimal.

    So I got asked again: “do we have a credible story for sandboxed Tracker?”. One way or another, it seems we don't; back to the drawing board.

    Somehow, the web world seems to share some problems with our case, and seems to handle it with some degree of success. Let’s have a look at some excerpts of the Sparql 1.1 recommendation (emphasis mine):

    RDF is often used to represent, among other things, personal information, social networks, metadata about digital artifacts, as well as to provide a means of integration over disparate sources of information.

    A Graph Store is a mutable container of RDF graphs managed by a single service. […] named graphs can be added to or deleted from a Graph Store. […] a Graph Store can keep local copies of RDF graphs defined elsewhere […] independently of the original graph.

    The execution of a SERVICE pattern may fail due to several reasons: the remote service may be down, the service IRI may not be dereferenceable, or the endpoint may return an error to the query. […] Queries may explicitly allow failed SERVICE requests with the use of the SILENT keyword. […] (SERVICE pattern) results are returned to the federated query processor and are combined with results from the rest of the query.

    So according to Sparql 1.1, we have multiple “Graph Stores” that manage multiple RDF graphs. They may federate queries to other endpoints with disparate RDF formats and whose availability may vary. This remote data is transparent, and may be used directly or processed for local storage.

    Let's look back at Tracker: we have a single Graph Store, which really is not that good at graphs. Responsibility for keeping that data updated is spread across multiple services, and ownership of that data is equally scattered.

    Then it clicked: if we transpose those same concepts from the web to the network of local services that your session is, we can use those same mechanisms to cut a number of drawbacks short:

    • Ownership is clear: If a service wants to store data, it would get its own Graph Store instead of modifying “the one”. Unless explicitly supported, Graph Stores cannot be updated from the outside.
    • So is lifetime: There’s been debate about whether data indexed “in Tracker” is permanent data or a cache. Everyone would get to decide their best fit, unaffected by everyone else’s decisions. The data from tracker-miners would totally be a cache BTW :).
    • Increases trustability: If Graph Stores cannot be tampered with externally, you can trust their content to represent the best effort of their only producer, instead of the minimum common denominator of all services updating “the Graph Store”.
    • Gives a mechanism for data isolation: Graph Stores may choose limiting the number of graphs seen on queries federated from other services.
    • Is sandboxing friendly: From inside a sandbox, you may get limited access to the other endpoints you see, or to the graphs offered. Updates are also limited by nature.
    • But works the same without a sandbox. It also has some benefits, like reducing data duplication and making for smaller databases.

    Domain ontologies from Tracker 2.0 also handle some of those differently, but very very roughly. So the first thing to do to get to that RDF nirvana was muscling up that Sparql support in Tracker, and so I did! I already had some “how could it be possible to do…” plans in my head to tackle most of those, but unfortunately they require changes to the internal storage format.

    As it seemed the time to do one (FTR, the storage format has been “unchanged” since 0.15), I couldn't just do the bare minimum work; it was too good an opportunity to miss, rather than maybe having to touch the format again later for leftover Sparql 1.1 syntax support.

    Things ended up escalating into https://gitlab.gnome.org/GNOME/tracker/commits/wip/carlosg/sparql1.1, where it can be said that Tracker supports 100% of the Sparql 1.1 syntax. No buts, maybe bugs.

    Some notable additions are:

    • Graphs are fully supported there, along with all graph management syntax.
    • Support for query federation through SERVICE {}
    • Data dumping through DESCRIBE and CONSTRUCT query forms.
    • Data loading through LOAD update form.
    • The pesky negated property path operator.
    • Support for rdf:langString and rdf:List
    • All missing builtin functions
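
    As a quick illustration of what the federation syntax enables, here is a hedged sketch written against the introspected Tracker 2.x API; the SERVICE IRI below is made up, and the exact scheme the branch expects may differ:

    import gi
    gi.require_version('Tracker', '2.0')
    from gi.repository import Tracker

    # Sketch: run a query that federates part of the pattern to another
    # local endpoint through SERVICE {}. The endpoint name is hypothetical.
    conn = Tracker.SparqlConnection.get(None)
    cursor = conn.query(
        "SELECT ?u { SERVICE <dbus:org.example.SomeGraphStore> "
        "{ ?u a rdfs:Resource } }",
        None)
    while cursor.next(None):
        print(cursor.get_string(0)[0])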

    This is working well, and is almost drop-in (one's got to mind the graph semantics), so making it material for Gnome 3.34 starts to sound realistic.

    As Sparql 1.1 is a recommendation finished in 2013, and no newer versions seem to be in the works, I think it can be said Tracker is reaching maturity. Only the HTTP Graph Store Protocol (because why not) remains as the big-ish item before we can reasonably claim to implement all 11 documents. Note that Tracker's bet on RDF and Sparql started at a time when 1.0 was the current document and 1.1 just an early draft.

    And sandboxing support? You might already guess the features it'll draw from. It's coming along; actually using Tracker as described above goes a bit deeper than the required query language syntax, more on that when I have the relevant pieces in place. I just thought I'd stop a moment to announce this huge milestone :).

    12 de July de 2019

    «La película» (“The Film”), a short story

    I am rescuing from my archives a little story I wrote in March 1998. It may not be a great thing, but I think the result is pretty.


    The afternoon promised to be fantastic. The film series itself promised it. My friends were doing a great job at that film club and I finally had the chance to see Dune, one of the most special films in science fiction cinema. A good moment to have fun, a good moment to get to know the world a little better. I was not even sixteen years old.

    I arrived late and could only get a seat at the back. It was not ideal, but at least I was not left outside like others. Next to me sat Bearn. This Bearn was a very strange guy. Too strange for me back then (now I am the freak), but he was not bad company: fun and doing his own thing. We hardly had any dealings with each other, and I suppose his impression of me must not have been too different. It matters little, after all. The only relevant fact is that he was the sole witness of my bitterness, that afternoon which surfaces today in my memory.

    I do not know exactly when it happened. I could only say it was in the second year of BUP. I do not know the month or even the season. I barely remember the boy, and of her, only a haze, a desire, an image of beauty freed from all defect. Like any self-respecting memory.

    I suppose it did not take me too long to notice the strange familiarity of the girl occupying the seat in front of me. That capacity for recognition has been switched to automatic mode in me ever since. Though I must point out to the reader that there is no special reason for it now; it is simply a habit and it does me no harm. As I was saying, I soon recognized in that girl's features those of the person who had been driving me out of my senses for weeks. Months. By then I do not know whether I had already shown her my love (my first and only declaration of love, a sad and solitary “I love you” at the door of the high school, in the only second I managed to be alone with her); the fact is that she was there and I was not prepared. My restless ego of a juvenile lover grew nervous and my sense of observation sharpened to the point of paranoia. She was alone. A girl like that could not be alone. They never are. They move in the shadow of a male when not protected by the invulnerable palisade of their girlfriends. And she was alone. My eyes scanned all the space around her, suspicious even of the air she breathed. And they came to rest on a guy sitting to her left. I barely knew him; he seemed like a good sort and had gone out a couple of times with a classmate of mine. A good guy who did not fit into the matter. I was perplexed when I confirmed that they had indeed come together. The film had started minutes ago.

    Heartbreak in youth is something very intense. It is empty of all reality but later, with time, one will long for the passion. When the years have burned the soul, heartbreak is only bitterness that stokes the fire. In the damned years of youth it is a heroic virtue. As stupid as all of them, it nonetheless shone with heartrending beauty. That afternoon, perhaps a winter one, my heart burst into pieces, the fuse lit by the spark of two hands clasping each other. And neither was mine. That afternoon the world fell on top of me, in parallel with the initiation journey of the young Atreides. That afternoon I filled the hidden lakes of Dune with my tears. A stone guest, the sky brushing my fingertips, I lived my heart's exile in the desert of a desert planet which, however big it might have been, would never fill the loneliness of my poor self-pitying soul. Tonight the film was a different one. The desert is the same.


    FidoNET R34: recovering mail from the echomail areas

    This entry was originally published in the esLibre forum, https://charla.eslib.re.


    FidoNet logo

    A few months ago I set out to recover digital material from my archives about the early years of the Linux community in Spain, in particular my FidoNet files. A conversation then came up on Twitter about the R34.Linux echomail and the possibility of recovering mail from that era to rescue and republish it:

    I found Kishpa_'s initiative wonderful, but when I checked my data I found that at some point I had suffered a crash of the message base and lost all the mail from, what, five years? or more. The sudden memory of that day hurt almost as much as it did back then.

    GoldED editor

    Since many of us who were around at the dawn of HispaLinux got started through FidoNET, my question is this: has anyone, by any chance, overcome the vicissitudes of information persistence across the decades and still keeps their FidoNET archives, ready to be recovered and republished? Not only the mail from the R34.Linux and R34.Unix areas, where it all really began, but any other archived echomail.

    If so, let's get it to Kishpa_. It is a beautiful project of digital memory recovery, even if only for archival purposes.

    Come on, everybody go rummaging through your backups!

    A copy of the HispaLinux website from 1998

    Taking advantage of this review session of my nineties archives, I am publishing on this website a snapshot of the HispaLinux association website from March 1998:

    GoldED editor

    You know: I am not doing it out of nostalgia but for the sake of digital memory.

    02 de July de 2019

    http://charla.eslib.re is now open

    esLibre community

    I have the pleasure of announcing that a new web discussion forum for the esLibre community is now up: https://charla.eslib.re. This is one more step in promoting the regeneration of what the HispaLinux community in Spain once was, towards a new future:

    Charla esLibre

    Obviously we have chosen Discourse, the best software for running discussion forums these days. And on top of that it is free software. My thanks to its whole development team for the wonderful product they have created.

    Chat groups are also enabled on Telegram (https://t.me/esLibre) and on Matrix (#esLibre:matrix.org). Both groups are joined through a Telegram <-> Matrix bridge.

    Thanks to the folks who took care of setting up all the services. Be welcome.

    29 de June de 2019

    Now I have a web Solid pod

    I’ve just created my Solid pod: https://olea.solid.community/.

    Tim Berners-Lee proposes Solid as a way to implement his original vision for the World Wide Web. If timbl says something like this then I’m interested:

    Within the Solid ecosystem, you decide where you store your data. Photos you take, comments you write, contacts in your address book, calendar events, how many miles you run each day from your fitness tracker… they’re all stored in your Solid POD. This Solid POD can be in your house or workplace, or with an online Solid POD provider of your choice. Since you own your data, you’re free to move it at any time, without interruption of service.

    More details are at https://solid.inrupt.com/how-it-works.

    I've only poked a bit at what Solid can do; I don't have much time for it right now. It's nice to see how it's based on linked data, so the potential applications are endless. And they have a forum too (running Discourse, ♥).

    My personal IT strategy is to run my own services as much as I can. Solid has a server implementation available which I would like to deploy somewhere in the future.

    Love to see the Semantic Web coming back.

    28 de June de 2019

    The video of my 29110_EPF_library talk at esLibre 2019 is out

    29110_EPF_library

    I have the enormous satisfaction of announcing that the video of the talk I gave at esLibre 2019 has been published. All the credit goes to César García (elsatch), and it is published on his channel La Hora Maker. I am really thrilled and I think the result turned out quite well. Thanks César.

    The slides of the talk are also available on this website.

    18 de June de 2019

    esLibre 2019: wikithon on the historic built heritage of Andalusia

    esLibre conference

    In my last post I already mentioned that I will give a couple of sessions at the esLibre conference next Friday in Granada. This entry is dedicated to the hands-on workshop Wikitatón de patrimonio inmueble histórico de Andalucía: de Andalucía para España y la Humanidad, simply to include a list of links and materials of interest for the workshop. The information is very schematic because it is only meant to be used during that workshop. Here it goes.

    Official references

    Main Wikimedia services of interest

    Related material in the Wikimedia projects:

    Related SPARQL queries to Wikidata:

    Other external services of interest:

    Monument examples

    We will use a few examples as reference material. The Alhambra one is especially relevant because it is, by a wide margin, the entry with the most data in the entire Andalusian heritage guide.

    Alhambra de Granada

    Puente del Hacho

    Estación de Renfe de Almería

    10 de June de 2019

    Taking part in the esLibre 2019 conference

    esLibre conference

    Some time ago I mentioned that I had sent several activity proposals to the esLibre conference, where we will meet again, among others, old friends from the HispaLinux era and, I hope, lots of new people. I finally dropped one of the proposals, because the gathering lasts a single day and I also wanted to be able to attend other talks and, above all, hang out with my old pals.

    These are the two:

    29110_EPF_library

    As I said before, I am very excited to present the preliminary work on 29110_EPF_library, the focus of the final degree project I will at last submit in September, right before the LibreOffice Conference we are hosting in Almería. It will be very useful for structuring how best to communicate the project's findings, and with a bit of luck I will receive some feedback useful for the final report.

    About the wikithon: we will not have much time available to upload many results. If you are already familiar with Wikidata, perfect; for everyone else I will try to lower the entry barrier as much as possible. I don't plan to prepare much more material than some reference links, including a query or two against the fantastic SPARQL query service https://query.wikidata.org. It will be a very interactive session, and with a bit of luck there will be several of us around to lend the newcomers a hand. Remember it is VERY IMPORTANT to bring your own computer. And if you are familiar with linked data, and with JSON-LD in particular, do come, because we may need your help ;-)

    The conference programme is already published and will only see minor adjustments: https://eslib.re/2019/programa/

    esLibre conference programme

    For those interested: we have a Telegram group, Hispalinustálgicos, to which you are all warmly invited.

    I hope you decide to come to Granada. Beyond the contents, the best part is the audience it gathers: the crème de la crème of the Spanish Linux crowd. And, of course, the city of Granada itself:

    «Give him alms, woman,
    for there is nothing in life
    like the sorrow of being
    blind in Granada.
    »

    25 de May de 2019

    Deleting old messages in Gmail

    I am jotting down a quick recipe here so I don't forget it, and in case it is of interest to other people.

    The problem: I have been using Gmail for maaany years and until today I had never considered cleaning it up. But I was at 93% of capacity (17.74 GB used out of the 19 GB available), so I bit the bullet and managed to bring it down a bit, to 14.34 GB = 75%, by deleting all messages older than 10 years.

    93% usage = yes, I am an email Diogenes

    The solution: basically, use this Python script. I added the necessary imports and left it in a Gist. For the script to work you have to enable IMAP in Gmail and (this is important) access for «less secure» apps in your Google account's security options.

    Enable Less secure app access.

    If you have 2-Step Verification, you will have to disable it momentarily. When you finish running the script, remember to turn it back on (and to disable Less secure app access).
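
    The linked Gist is the script to use; purely as a sketch of the idea (the mailbox name and cut-off date below are assumptions, and on Gmail “deleting” means moving to Trash), an age-based cleanup over IMAP looks roughly like this:

    import imaplib

    # Sketch: move every message older than a cut-off date to Trash.
    # '[Gmail]/All Mail' depends on the account language.
    m = imaplib.IMAP4_SSL('imap.gmail.com')
    m.login('user@gmail.com', 'password')
    m.select('"[Gmail]/All Mail"')

    _, data = m.uid('search', None, 'BEFORE', '01-Jan-2010')
    uids = [u.decode() for u in data[0].split()]
    if uids:
        # X-GM-LABELS is Gmail's IMAP extension; labelling as \Trash deletes
        m.uid('store', ','.join(uids), '+X-GM-LABELS', '\\Trash')
    m.logout()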

    75% usage, much better 🙂

    05 de May de 2019

    Intellectual property registries

    Last Monday, Carlos J. Vives and I gave a talk about Creative Commons and open content at a local education center:

    The talk is part of the ccALM Almería Creative Commons Festival.

    The only goal of this entry is to collect some links to IP registration services useful for any digital creator on the Internet, particularly for open culture works. As far as I remember:

    In the past there was this Digital Media Rights service, but it seems broken now: http://dmrights.com/

    Limited to Spain, there are two publicly managed services:

    Some of these services are meant to be used as a legal resource in case of litigation. Others are just a historical record for websites. If you want to use any of them, study carefully their features and the advantages relevant to your interest.

    31 de January de 2019

    A mutter and gnome-shell update

    Some personal highlights:

    Emoji OSK

    The reworked OSK was featured a couple of cycles ago, but a notable thing that was still missing from the design reference was emoji input.

    No more, sitting in a branch as of yet:

    This UI feeds from the same emoji list as GtkEmojiChooser and applies the same categorization/grouping; all the additional variants of an emoji are available through a popover. There's also a (less catchy) keypad UI in place, ultimately hooked up to applications through the GtkInputPurpose.

    I do expect this to be in place for 3.32 for the Wayland session.

    X11 vs Wayland

    Ever since the wayland work started on mutter, there have been ideas and talks about how mutter “core” should become detached from X11 code. It has been a long and slow process; every design decision has been directed towards this goal, we leaped forward in the 2017 GSoC, and e.g. Georges sums up some of his own recent work in this area.

    For me it started with a “Hey, I think we are not that far off” comment in #gnome-shell earlier this cycle. Famous last words. After rewriting several, many, seemingly unrelated subsystems, and shuffling things here and there, we got to a point where gnome-shell might run with --no-x11 set. A little push more and we will be able to launch mutter as a pure wayland compositor that just spawns Xwayland on demand.

    What's after that? It's certainly an important milestone, but by no means are we done here. Also, gnome-settings-daemon consists for the most part of X11 clients, which spoils the fun by requiring Xwayland very early in a real session; guess what's next!

    At the moment about 80% of the patches have been merged. I cannot assure at this point that it will all be in place for 3.32, but 3.34 most surely. But here's a small yet extreme proof of work:

    Performance

    It’s been nice to see some of the performance improvements I did last cycle being finally merged. Some notable ones, like that one that stopped triggering full surface redraws on every surface invalidation. Also managed to get some blocking operations out of the main loop, which should fix many of the seemingly random stalls some people were seeing.

    Those are already in 3.31.x, with many other nice fixes in this area from Georges, Daniel Van Vugt et al.

    Fosdem

    As a minor note, I will be attending Fosdem and the GTK+ Hackfest happening right after. Feel free to say hi or find Wally, whatever comes first.

    29 de January de 2019

    Working on the Chromium Servicification Project

    Igalia & Chromium

    It's been a few months already since I (re)joined Igalia as part of its Chromium team, and I couldn't be happier about it: right from the very first day I felt perfectly integrated into the team, and I quickly started making my way through the (fully upstream) project that would keep me busy during the following months: the Chromium Servicification Project.

    But what is this “Chromium servicification project”? Well, according to the Wiktionary the word “servicification” means, applied to computing, “the migration from monolithic legacy applications to service-based components and solutions”, which is exactly what this project is about: as described in the Chromium servicification project's website, the whole purpose behind this idea is “to migrate the code base to a more modular, service-oriented architecture”, in order to “produce reusable and decoupled components while also reducing duplication”.

    Doing so would not only make Chromium a more manageable project from a source-code point of view and create better and more stable interfaces to embed Chromium from different projects, but it should also enable teams to experiment with new features by combining these services in different ways, as well as to ship different products based on Chromium without having to bundle the whole world just to provide a particular set of features.

    For instance, as Camille Lamy put it in the talk delivered (slides here) during the latest Web Engines Hackfest,  “it might be interesting long term that the user only downloads the bits of the app they need so, for instance, if you have a very low-end phone, support for VR is probably not very useful for you”. This is of course not the current status of things yet (right now everything is bundled into a big executable), but it’s still a good way to visualise where this idea of moving to a services-oriented architecture should take us in the long run.

    Chromium Servicification Layers

    With this in mind, the idea behind this project would be to work on the migration of the different parts of Chromium depending on those components that are being converted into services, which would be part of a “foundation” base layer providing the core services that any application, framework or runtime build on top of chromium would need.

    As you can imagine, the whole idea of refactoring such an enormous code base as Chromium's is daunting and a lot of work, especially considering that currently ongoing efforts can't simply be stopped just to perform this migration, and that is where our focus is currently aimed: we integrate with the different teams from the Chromium project working on the migration of those components into services, and we make sure that the clients of their old APIs move away from them and use the new services' APIs instead, while keeping everything running normally in the meantime.

    At the beginning, we started working on the migration to the Network Service (which allows running Chromium's network stack even without a browser) and managed to get it shipped in Chromium Beta by early October already, which was a pretty big deal as far as I understand. In my particular case that stage was a very short ride, since the migration was nearly done by the time I joined Igalia, but it is still worth mentioning for extra context, due to the impact it had on the project.

    After that, our team started working on the migration of the Identity service, where the main idea is to encapsulate the functionality of accessing the user’s identities right through this service, so that one day this logic can be run outside of the browser process. One interesting bit about this migration is that this particular functionality (largely implemented inside the sign-in component) has historically been located quite high up in the stack, and yet it’s now being pushed all the way down into that “foundation” base layer, as a core service. That’s probably one of the factors contributing to making this migration quite complicated, but everyone involved is being very dedicated and has been very helpful so far, so I’m confident we’ll get there in a reasonable time frame.

    If you’re curious enough, though, you can check this status report for the Identity service, where you can see the evolution of this particular migration, along with the impact our team had since we started working on this part, back on early October. There are more reports and more information in the mailing list for the Identity service, so feel free to check it out and/or subscribe there if you like.

    One clarification is needed, though: for now, the scope of these migrations is focused on using the public C++ APIs that such services expose (see //services/<service_name>/public/cpp), but in the long run the idea is that those services will also provide Mojo interfaces. That will enable using their functionality regardless of whether you're running those services as part of the browser's process or inside their own separate processes, which will then allow the flexibility that Chromium will need to run smoothly and safely in different kinds of environments, from the least constrained ones to others with a less favourable set of resources at their disposal.

    And this is it for now, I think. I was really looking forward to writing a status update about what I’ve been up to in the past months and here it is, even though it’s not the shortest of all reports.

    FOSDEM 2019

    One last thing, though: as usual, I’m going to FOSDEM this year as well, along with a bunch of colleagues & friends from Igalia, so please feel free to drop me/us a line if you want to chat and/or hangout, either to talk about work-related matters or anything else really.

    And, of course, I'd also be more than happy to talk about any of the open job positions at Igalia, should you consider applying. There are quite a few of them available at the moment for all kinds of things (most of them available for remote work): from more technical roles such as graphics, compilers, multimedia, JavaScript engines, browsers (WebKit, Chromium, Web Platform) or systems administration (this one not available for remotes, though), to other less “hands-on” types of roles like developer advocate, sales engineer or project manager, so it's possible there's something interesting for you if you're considering joining such a special company.

    See you in FOSDEM!

    08 de January de 2019

    Epiphany automation mode

    Last week I finally found some time to add the automation mode to Epiphany, which allows running automated tests using WebDriver. It's important to note that the automation mode is not expected to be used by users or applications to control the browser remotely, but only by WebDriver automated tests. For that reason, the automation mode is incompatible with a primary user profile. There are a few other things affected by the automation mode:

    • There’s no persistency. A private profile is created in tmp and only ephemeral web contexts are used.
    • URL entry is not editable, since users are not expected to interact with the browser.
    • An info bar is shown to notify the user that the browser is being controlled by automation.
    • The window decoration is orange to make it even clearer that the browser is running in automation mode.

    So, how can I write tests to be run in Epiphany? First, you need to install a recent enough Selenium. For now, only the Python API is supported. Selenium doesn't have an Epiphany driver, but the WebKitGTK driver can be used with any WebKitGTK+ based browser, by providing the browser information as part of the session capabilities.

    from selenium import webdriver
    
    options = webdriver.WebKitGTKOptions()
    options.binary_location = 'epiphany'
    options.add_argument('--automation-mode')
    options.set_capability('browserName', 'Epiphany')
    options.set_capability('version', '3.31.4')
    
    ephy = webdriver.WebKitGTK(options=options, desired_capabilities={})
    ephy.get('http://www.webkitgtk.org')
    ephy.quit()
    

    This is a very simple example that just opens Epiphany in automation mode, loads http://www.webkitgtk.org and closes Epiphany. A few comments about the example:

    • Version 3.31.4 will be the first one including the automation mode.
    • The desired_capabilities parameter shouldn’t be needed, but there’s a bug in Selenium that has only been fixed very recently.
    • WebKitGTKOptions.set_capability was added in Selenium 3.14; if you have an older version you can use the following snippet instead:
    from selenium import webdriver
    
    options = webdriver.WebKitGTKOptions()
    options.binary_location = 'epiphany'
    options.add_argument('--automation-mode')
    capabilities = options.to_capabilities()
    capabilities['browserName'] = 'Epiphany'
    capabilities['version'] = '3.31.4'
    
    ephy = webdriver.WebKitGTK(desired_capabilities=capabilities)
    ephy.get('http://www.webkitgtk.org')
    ephy.quit()
    

    To simplify the driver instantiation you can create your own Epiphany driver derived from the WebKitGTK one:

    from selenium import webdriver
    
    class Epiphany(webdriver.WebKitGTK):
        def __init__(self):
            options = webdriver.WebKitGTKOptions()
            options.binary_location = 'epiphany'
            options.add_argument('--automation-mode')
            options.set_capability('browserName', 'Epiphany')
            options.set_capability('version', '3.31.4')
    
            webdriver.WebKitGTK.__init__(self, options=options, desired_capabilities={})
    
    ephy = Epiphany()
    ephy.get('http://www.webkitgtk.org')
    ephy.quit()
    

    The same for Selenium < 3.14:

    from selenium import webdriver
    
    class Epiphany(webdriver.WebKitGTK):
        def __init__(self):
            options = webdriver.WebKitGTKOptions()
            options.binary_location = 'epiphany'
            options.add_argument('--automation-mode')
            capabilities = options.to_capabilities()
            capabilities['browserName'] = 'Epiphany'
            capabilities['version'] = '3.31.4'
    
            webdriver.WebKitGTK.__init__(self, desired_capabilities=capabilities)
    
    ephy = Epiphany()
    ephy.get('http://www.webkitgtk.org')
    ephy.quit()
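
    With the custom driver in place, an actual test is just regular Selenium code. A minimal sketch (the title check and the link text are assumptions about webkitgtk.org’s current markup, which may change):

    ephy = Epiphany()
    ephy.get('http://www.webkitgtk.org')
    # Hypothetical checks: adjust to whatever the page actually contains.
    assert 'WebKitGTK' in ephy.title
    gnome = ephy.find_element_by_partial_link_text('GNOME')
    gnome.click()
    ephy.quit()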
    

    21 de December de 2018

    Importing JSON into MySQL using MySQL Shell

    The MySQL Shell utility lets us import a JSON file into a MySQL table or collection.

    First we must enable the X Protocol (mysqlx):

    $ mysqlsh -u root -h localhost --mysql --dba enableXProtocol
    Please provide the password for 'root@localhost':
    Save password for 'root@localhost'? [Y]es/[N]o/Ne[v]er (default No):
    enableXProtocol: Installing plugin mysqlx…
    enableXProtocol: done

    And now we can connect to the MySQL server using MySQL Shell (and the X Protocol):

    $ mysqlsh -u root -h localhost --mysqlx

    I have an empty database called addi, and I want to import the file result.json there, into a collection named addi_collection.

    The command to run is:

    MySQL Shell > util.importJson("result.json", {schema: "addi", collection: "addi_collection"});
    Importing from file "result.json" to collection `addi`.`addi_collection` in MySQL Server at localhost:33060

    The problem I had is that my JSON file didn’t have a unique _id field in each record (see the previous ikasten.io post, below), so I had to create it. This wouldn’t be a problem on MySQL Server > 8.0, but I’m using an oldish server (5.7.19), so I got this error:

    Processed 182.22 KB in 80 documents in 0.0340 sec (2.35K documents/s)
    Total successfully imported documents 0 (0.00 documents/s)
    Document is missing a required field (MySQL Error 5115)

    After adding the _id field to every record, I was able to import without problems:

    util.importJson("result.json", {schema: "addi", collection: "addi_collection"});
    Importing from file "result.json" to collection `addi`.`addi_collection` in MySQL Server at localhost:33060
    .. 80.. 80
     Processed 182.93 KB in 80 documents in 0.0379 sec (2.11K documents/s)
     Total successfully imported documents 80 (2.11K documents/s)

    More info about the JSON import utility in MySQL Shell.

    The result of the import is stored in a collection reminiscent of MongoDB collections.
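
    To read the imported documents back from code, here’s a minimal sketch using the X DevAPI from MySQL Connector/Python (the mysqlx module); the credentials are placeholders for this example:

    import mysqlx

    # Connect over the X Protocol (default port 33060).
    session = mysqlx.get_session({'host': 'localhost', 'port': 33060,
                                  'user': 'root', 'password': 'secret'})
    collection = session.get_schema('addi').get_collection('addi_collection')

    # Fetch the document that got _id = 1 when we created the field.
    result = collection.find('_id = :id').bind('id', 1).execute()
    print(result.fetch_one())
    session.close()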

    20 de December de 2018

    Search and replace with incremental values in Vim

    Suppose we have a JSON file like the following:

    { "clave1" : "valor11", "clave2": "valor12", … }
    { "clave1" : "valor21", "clave2": "valor22", … }
    …
    { "clave1" : "valorN1", "clave2": "valorN2", … }

    and we want to add a new field at the beginning, with an incremental _id, so that it ends up like this:

    { "_id" : 1, "clave1" : "valor11", "clave2": "valor12", … }
    { "_id" : 2, "clave1" : "valor21", "clave2": "valor22", … }
    …
    { "_id" : n, "clave1" : "valorN1", "clave2": "valorN2", … }

    In Vim we can do it by defining a function:

    :let g:incr = 0 
    :function Incr() 
    :let g:incr = g:incr + 1 
    :return g:incr   
    :endfu

    Once the Incr() function is defined, we can invoke it in a find & replace command using the \= operator, which lets us evaluate expressions and perform the substitution we want.

    That is:

    :%s/^{/\="{\"_id\":" . Incr() . ","/gc

    which follows the general form:

    :%s/string_to_find/replacement_string/gc

    String to find: ^{ (lines starting with {)
    Replacement string: \="{\"_id\":" . Incr() . "," (that is, evaluate the expression "{\"_id\":" . Incr() . ",", which will initially produce {"_id":1, )
    /gc: global changes (the whole document, not just the first match), with confirmation (you can press the “a” (all) key once you see the changes are correct after the first few substitutions)
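
    For comparison, the same transformation outside Vim, as a minimal Python sketch (assuming the input is one JSON object per line, as in the example above; the file names are illustrative):

    import json

    # Read one JSON object per line, prepend an incremental _id,
    # and write the result to a new file.
    with open('result.json') as src, open('result_with_id.json', 'w') as dst:
        for i, line in enumerate(src, start=1):
            record = {'_id': i, **json.loads(line)}
            dst.write(json.dumps(record, ensure_ascii=False) + '\n')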

    If you want more info about functions and the VimScript language, take a look at this tutorial.

    15 de December de 2018

    Disabling Command+C in VirtualBox for macOS

    A quick tip about something that had intrigued me for a while. If you use VirtualBox on macOS, you have surely, with a virtual machine running, accidentally pressed Command+C (⌘+C) to copy text (the default shortcut on macOS) instead of Ctrl+C (the default on Linux and Windows). The problem is that in VirtualBox the Command+C combination scales the screen size (and makes it tiny!). To disable this annoying behaviour, just open the VirtualBox preferences (press ⌘+,), go to the Input tab, then the Virtual Machine tab, click on Scaled Mode and delete the wretched shortcut.

    Goodbye, ⌘+C!

    25 de November de 2018

    Frogr 1.5 released

    It’s almost one year later and, despite the acquisition by SmugMug a few months ago and the predictions from some people that it would mean me no longer using Flickr or maintaining frogr, here comes the new release of frogr 1.5.

    Frogr 1.5 screenshot

    Not many changes this time, but some of them hopefully still useful for some people, such as the empty initial state that is now shown when you don’t have any pictures, as requested a while ago already by Nick Richards (thanks Nick!), or the removal of the applications menu from the shell’s top panel (now integrated in the hamburger menu), in line with the “App Menu Retirement” initiative.

    Then there were some fixes here and there as usual, and quite so many updates to the translations this time, including a brand new translation to Icelandic! (thanks Sveinn).

    So this is it this time, I’m afraid. Sorry there’s not much to report, and sorry as well for the long time it took me to do this release, but this past year has been pretty busy between hectic work at Endless during the first part of the year, a whole international relocation with my family to move back to Spain during the summer, and me getting back to work at Igalia as part of the Chromium team, where I’m currently pretty busy working on the Chromium Servicification project (which is material for a completely different blog post, of course).

    Anyway, last but not least, feel free to grab frogr from the usual places as outlined on its main website, among which I’d recommend the Flatpak method, either via GNOME Software or from the command line by just doing this:

    flatpak install --from \
        https://flathub.org/repo/appstream/org.gnome.frogr.flatpakref

    For more information just check the main website, which I also updated to this latest release, and don’t hesitate to reach out if you have any questions or comments.

    Hope you enjoy it. Thanks!

    15 de November de 2018

    On the track for 3.32

    It happens sneakily, but there are more things going on on the Tracker front than the occasional fallout. Yesterday 2.2.0-alpha1 was released, containing some notable changes.

    On and off during the last year, I’ve been working on a massive rework of the SPARQL parser. The current parser was fairly solid, but hard to extend for some of the syntax in the SPARQL 1.1 spec. After multiple attempts and failures at implementing property paths, I convinced myself this was the way forward.

    The main difference is that the previous parser was more of a serializer to SQL; only minimal state was preserved across the operation. The new parser does construct an expression tree, so that nodes may be shuffled/reevaluated. This allows some sweet things:

    • Property paths are a nice resource for writing more idiomatic SPARQL; most property path operators are within reach now. There’s currently support for sequence paths:

      # Get all files in my homedir
      SELECT ?elem {
        ?elem nfo:belongsToContainer/nie:url 'file:///home/carlos'
      }
      


      And inverse paths:

      # Get all files in my homedir by inverting
      # the child to container relation
      SELECT ?elem {
        ?homedir nie:url 'file:///home/carlos' ;
                 ^nfo:belongsToContainer ?elem
      }
      

      There are harder ones like + and * that will require recursive selects, and there’s the negation (!) operator, which is not possible to implement yet.

    • We now have prepared statements! A TrackerSparqlStatement object was introduced, capable of holding a query with parameters which can be set/replaced prior to execution.

      /* Get the default SPARQL connection (cancellable/error omitted) */
      conn = tracker_sparql_connection_get (NULL, NULL);

      /* Prepare a statement; ~term is a parameter placeholder */
      stmt = tracker_sparql_connection_query_statement (conn,
                                                        "SELECT ?u { ?u fts:match ~term }",
                                                        NULL, NULL);

      /* Bind the parameter and execute, obtaining a results cursor */
      tracker_sparql_statement_bind_string (stmt, "term", search_term);
      cursor = tracker_sparql_statement_execute (stmt, NULL, NULL);
      

      This is long-sought protection against injections. The object is cacheable and can service multiple cursors asynchronously, so it will also be an improvement for frequent queries.

    • More concise SQL is generated at places, which brings slight improvements on SQLite query planning.

    This also got the ideas churning towards future plans, the trend being a generic triple store that is as SPARQL 1.1-capable as possible. There are also some ideas about better data isolation for Flatpak and sandboxes in general (seeing that the currently supported approach didn’t catch on). Those will eventually happen in this or the following cycles, but I’ll reserve that for another blog post.

    An eye was kept on memory usage too (mostly unrealized ideas from the performance hackfest earlier this year): tracker-store has been made to automatically shut down when unneeded (ideally most of the time, since it just takes care of updates and the unruly apps that use the bus connection), and tracker-miner-fs took over the functionality of tracker-miner-apps. That’s two processes fewer in your default session.

    In general, we’re on the way to an exciting release, and there’s more to come!

    25 de October de 2018

    3 events in a month

    As part of my job at Igalia, I have been attending 2-3 events per year. My role, mostly as a Chromium stack engineer, does not usually demand many conference trips, but they are quite important as an opportunity to meet collaborators and project mates.

    This month has been a bit different, as I ended up visiting Santa Clara LG Silicon Valley Lab in California, Igalia headquarters in A Coruña, and Dresden. It was mostly because I got involved in the discussions for the web runtime implementation being developed by Igalia for AGL.

    AGL f2f at LGSVL

    It is always great to visit LG Silicon Valley Lab (Santa Clara, US), where my team is located. I have been participating for 6 years in the development of the webOS web stack you can most prominently enjoy in LG webOS smart TV.

    One of the goals for the next months at AGL is providing an efficient web runtime. At LGSVL we have been developing and maintaining WAM, the webOS web runtime. And as it was released under an open source license in webOS Open Source Edition, it looked like a great match for AGL. So my team did a proof of concept in May and it was successful. At the same time, Igalia has been working on porting the Chromium browser to AGL. So, after some discussions, AGL approved sponsoring my company, Igalia, to port the LG webOS web runtime to AGL.

    As LGSVL was hosting the September 2018 AGL f2f meeting, Igalia sponsored my trip to the event.

    AGL f2f Santa Clara 2018, AGL wiki CC BY 4.0

    So we took the opportunity to continue the discussions and make progress in the development of the WAM AGL port. And, as we expected, it was quite beneficial for unblocking tasks like the AGL app framework security integration, and the support of AGL’s latest official release, Funky Flounder. Julie Kim from Igalia attended the event too, and presented an update on the progress of the Ozone Wayland port.

    The organization and the venue were great. Thanks to LGSVL!

    Web Engines Hackfest 2018 at Igalia

    The next trip was definitely closer. Just a 90-minute drive to our Igalia headquarters in A Coruña.


    Igalia has been organizing this event since 2009. It is a cross-web-engine event, where engineers of Mozilla, Chromium and WebKit have been meeting yearly to do some hacking, and discuss the future of the web.

    This time my main interest was participating in the discussions about the effort by Igalia and Google to support Wayland natively in Chromium. I was pleased to learn that around 90% of the work had already landed in upstream Chromium. Great news, as it will smooth the integration of Chromium for embedders using Ozone Wayland, like webOS. It was also great to learn about the work on improving GPU performance by reducing the number of copies required for painting web contents.

    Web Engines Hackfest 2018 CC BY-SA 2.0

    Other topics of my interest:
    – We did a follow-up of the discussion in last BlinkOn about the barriers for Chromium embedders, sharing the experiences maintaining a downstream Chromium tree.
    – Joined the discussions about the future of WebKitGTK. In particular the graphics pipeline adaptation to the upcoming GTK+ 4.

    As usual, the organization was great. We had 70 people in the event, and it was awesome to see all the activity in the office, and so many talented engineers in the same place. Thanks Igalia!

    Web Engines Hackfest 2018 CC BY-SA 2.0

    AGL All Members Meeting Europe 2018 at Dresden

    The last event in barely a month was my first visit to the beautiful town of Dresden (Germany).

    The goal was continuing the discussions on the projects Igalia is developing for the AGL platform: Chromium upstream native Wayland support, and the WAM web runtime port. We also had a booth showcasing that work, and also our lightweight WebKit port WPE which was, as usual, attracting interest with its 60fps video playback performance on a Raspberry Pi 2.

    I co-presented with Steve Lemke a talk about the automotive activities at LGSVL, taking the opportunity to give an update on the status of the WAM web runtime work for AGL (slides here). The project is progressing and Igalia should soon be landing the first results of the work.

    Igalia booth at AGL AMM Europe 2018

    It was great to meet all these people, and to discuss in person the architecture proposal for the web runtime, unblocking several tasks and producing a more detailed plan for the next months.

    Dresden was great, and I can’t help highlighting the reception and guided tour in the Dresden Transportation Museum. Great choice by the organization. Thanks to Linux Foundation and the AGL project community!

    Next: Chrome Dev Summit 2018

    So… what’s next? I will be visiting San Francisco in November for Chrome Dev Summit.

    I can only thank Igalia for sponsoring my attendance to these events. They are quite important for keeping things moving forward. But it is also really nice to meet friends and collaborators. Thanks Igalia!

    03 de August de 2018

    On Moving

    Winds of Change. One of my favourite songs ever, and one that comes to my mind now that my family and I are going through quite some important changes, once again. But let’s start from the beginning…

    A few years ago, back in January 2013, my family and I moved to the UK as the result of my decision to leave Igalia after almost 7 years in the company, to embark ourselves on the “adventure” of living abroad. This was an idea we had been thinking about for a while already at that time, and our situation back then suggested that it could be the right moment to try it out… so we did.

    It was kind of a long process though: I first arrived alone in January to make sure I would have time to figure things out and find a permanent place for us to live in, and then my family joined me later in May, once everything was ready. Not great, if you ask me, to be living separated from your loved ones for 4 full months, not to mention the juggling my wife had to do during that time to combine her job with looking after the kids mostly on her own… but we managed to see each other every 2-3 weekends thanks to the London – Coruña direct flights in the meantime, so at least it was bearable from that point of view.

    But despite those not so great (yet expected) beginnings, I have to say that these past 5+ years have been an incredible experience overall, and we don’t have a single regret about making the decision to move, maybe just a few minor and punctual things if I’m completely honest, but that’s about it. For instance, it’s been just beyond incredible and satisfying to see my kids develop their English skills “from zero to hero”, settle at their school, make new friends and, in one word, evolve during these past years. And that alone would have been a good reason to justify the move already, but it turns out we also have plenty of other reasons, as we all have evolved and enjoyed the ride quite a lot as well, made many new friends, got to know many new places, worked on different things… a truly enriching experience indeed!

    In a way, I confess that this could easily be one of those things we’d probably have never done if we knew in advance of all the things we’d have to do and go through along the way, so I’m very grateful for that naive ignorance, since that’s probably how we found the courage, energy and time to do it. And looking backwards, it seems clear to me that it was the right time to do it.

    But now it’s 2018 and, even though we had such a great time here both from personal and work-related perspectives, we have decided that it’s time for us to come back to Galicia (Spain), and try to continue our vital journey right from there, in our homeland.

    And before you ask… no, this is not because of Brexit. I recognize that the result of the referendum has been a “contributing factor” (we surely didn’t think as much about returning to Spain before that 23 of June, that’s true), but there were more factors contributing to that decision, which somehow have aligned all together to tell us, very clearly, that Now It’s The Time…

    For instance, we always knew that we would eventually move back for my wife to take over the family business, and also that we’d rather make the move in a way that it would be not too bad for our kids when it happened. And having a 6yo and a 9yo already it feels to us like now it’s the perfect time, since they’re already native English speakers (achievement unlocked!) and we believe that staying any longer would only make it harder for them, especially for my 9yo, because it’s never easy to leave your school, friends and place you call home behind when you’re a kid (and I know that very well, as I went through that painful experience precisely when I was 9).

    Besides that, I’ve also recently decided to leave Endless after 4 years in the company and so it looks like, once again, moving back home would fit nicely with that work-related change, for several reasons. Now, I don’t want to enter into much detail on why exactly I decided to leave Endless, so I think I’ll summarize it as me needing a change and a rest after these past years working on Endless OS, which has been an equally awesome and intense experience as you can imagine. If anything, I’d just want to be clear on that contributing to such a meaningful project surrounded by such a team of great human beings, was an experience I couldn’t be happier and prouder about, so you can be certain it was not an easy decision to make.

    Actually, quite the opposite: a pretty hard one I’d say… but a nice “side effect” of that decision, though, is that leaving at this precise moment would allow me to focus on the relocation in a more organized way as well as to spend some quality time with my family before leaving the UK. Besides, it will hopefully be also useful for us to have enough time, once in Spain, to re-organize our lives there, settle properly and even have some extra weeks of true holidays before the kids start school and we start working again in September.

    Now, taking a few weeks off and moving back home is very nice and all that, but we still need to have jobs, and this is where our relocation gets extra interesting as it seems that we’re moving home in multiple ways at once…

    First, my wife will start taking over the family business with the help of her dad in her home town of Lalín (Pontevedra), where we plan to be living for the foreseeable future. This is the place where she grew up and where her family and many friends live, but also a place she hasn’t lived in for the last 15 years, so the fact that we’ll be relocating there is already quite a thing in the “moving back home” department for her…

    Second, for my kids this will mean going back to having their relatives nearby once again, as well as friends they could only see and play with during holidays until now, which I think is a very good thing for them. Of course, this doesn’t feel as much like moving home for them as it does for us, since they obviously consider the UK their home for now, but our hope is that it will be ok in the medium-long term, even though it will likely be a bit challenging for them at the beginning.

    Last, I’ll be moving back to work at Igalia after almost 6 years since I left which, as you might imagine, feels to me very much like “moving back home” too: I’ll be going back to working in a place I’ve always loved so much for multiple reasons, surrounded by people I know and who I consider friends already (I even would call some of them “best friends”) and with its foundations set on important principles and values that still matter very much to me, both from technical (e.g. Open Source, Free Software) and not so technical (e.g. flat structure, independence) points of view.

    Those who know me better might very well think that I’ve never really moved on as I hinted in the title of the blog post I wrote years ago, and in some way that’s perhaps not entirely wrong, since it’s no secret I always kept in touch throughout these past years at many levels and that I always felt enormously proud of my time as an Igalian. Emmanuele even told me that I sometimes enter what he seems to call an “Igalia mode” when I speak of my past time in there, as if I was still there… Of course, I haven’t seen any formal evidence of such thing happening yet, but it certainly does sound like a possibility as it’s true I easily get carried away when Igalia comes to my mind, maybe as a mix of nostalgia, pride, good memories… those sort of things. I suppose he’s got a point after all…

    So, I guess it’s only natural that I finally decided to apply again since, even though both the company and I have evolved quite a bit during these years, the core foundations and principles it’s based upon remain the same, and I still very much align with them. But applying was only one part, so I couldn’t finish this blog post without stating how grateful I am for having been granted this second opportunity to join Igalia once again because, being honest, more often than not I was worried about whether I would be “good enough” for the Igalia of 2018. And the truth is that I won’t know for real until I actually start working and stay in the company for a while, but knowing that both my former colleagues and the newer Igalians who joined since I left trust me enough to join is all I need for now, and I couldn’t be more excited nor happier about it.

    Anyway, this post is already too long and I think I’ve covered everything I wanted to mention On Moving (pun intended with my post from 2012, thanks Will Thompson for the idea!), so I think I’ll stop right here and re-focus on the latest bits related to the relocation before we effectively leave the UK for good, now that we finally left our rented house and put all our stuff in a removals van. After that, I expect a few days of crazy unpacking and bureaucracy to properly settle in Galicia and then hopefully a few weeks to rest and get our batteries recharged for our new adventure, starting soon in September (yet not too soon!).

    As usual, we have no clue of how future will be, but we have a good feeling about this thing of moving back home in multiple ways, so I believe we’ll be fine as long as we stick together as a family as we always did so far.

    But in any case, please wish us good luck. That’s always welcome! :-)

    01 de August de 2018

    HackIt, SolveIt and SmashCTF (III) – HTML5 DRM – Conflicto ideológico


    DRM and HTML5. EME (Encrypted Media Extensions). You need to soak up these topics a bit to solve this level. EME offers an API that lets web applications interact with content protection systems in order to play encrypted audio or video. The famous DRM in HTML5, something many consider an aberration (the web was born to be open, not to serve locked-down content). But… there the API is. And it is precisely what we have to work with here. Basically, the client has a video tag. When you press play, 26 seconds are shown. But from there on, everything is black. It seems the webm video is protected. In the code we see that at some point a license request is made to a license server, which sends us the key to unprotect the webm.

    But that request can only be made if we fill in the missing bytes… and those bytes are part of the solution to the sudoku placed below the video. What do we do once we have the video unprotection key? Play the video in the browser 🙂 And after that? Well, we’ll see in a moment… Step by step. The first thing is to solve the sudoku. The next is to automate the process of typing the numbers into the sudoku cells (doing it by hand is hell).
    Solving the sudoku is easy. We go to sudoku-solutions.com, enter the data and press check…
    Oops, it has 9 possible solutions. It couldn’t be that easy…

    To avoid wasting time typing each of them, we can automate the process. We open the JavaScript console and type:

    var sudoku = $("#su input")
    var s ="852931647473862159961547283318476925549328761726159834637294518194685372285713496"
    for (var i = 0; i < sudoku.length; i++){ sudoku[i].value = s.charAt(i); }
    $("video")[0].needs_reload = 1;

    By the way, that code already carries the correct solution 🙂 The last line tells the browser that the sudoku has changed and its data must be re-read. OK, everything is ready. We press play and see that we get past second 26. It’s a trailer for “Inception”. There is a series of frames showing distorted pixels. Probably because some string that shouldn’t be there has been inserted around that point… We’ll have to download the webm, decrypt it and open it more or less at that part, to see which string it is.

    But how do we obtain the webm decryption key? (The browser knows it, but we need to isolate it…) And by the way, how many keys will there be? Let’s go.

    We open main.js and set a breakpoint at line 71:

    }).then(function(e) {
                    var n = (e = new Uint8Array(e)).slice(0, 12);
                    return window.crypto.subtle.decrypt({
                        name: "AES-GCM",
                        iv: n,
                        tagLength: 128
                    }, r, e.slice(12))
                }).then(function(e) {
    breakpoint --->                return t.target.update(e)
                })

    In e we’ll have the key. Careful: we’ll see the breakpoint being hit twice, and we’ll need to write down both keys (one is used to encrypt the video and the other to encrypt the audio). I seem to recall it wasn’t “that simple”, and that the keys obtained in e had to be converted to another format with a line like

    atob(String.fromCharCode.apply(null, new Uint8Array(e)))

    and then extract the 16-byte keys with a script like the following (one of the keys was w-UHS…):

    var b64string = "w-UHS56ogAQacZLNj1TpqA" ;
    var buf = Buffer.from(b64string, 'base64');
    var fs = require('fs');
    fs.writeFile("key.txt", buf,  "binary",function(err) {
        if(err) {
            console.log(err);
        } else {
            console.log("The file was saved!");
        }
    });

    Time to download the (encrypted) video and decrypt it. How do we decrypt? Well, we know the key and we have the encrypted video. What we are missing is how it was encrypted. Investigating a bit, we come across the webm_crypt utility from webm-tools. It has a dependency on libwebm but, following the build instructions in the previous link, we can get it without problems (on Linux; on macOS it didn’t work).

    We decrypt with:

    $ webm_crypt -i input.webm -o decrypted.webm -decrypt -audio_options base_file=clave1 -video_options base_file=clave2

    And finally, we can open the decrypted.webm file (for example, with VLC…)

    or with strings (!)

    $ strings -n 10 decrypted.webm

    (Note: -n 10 = give me the printable ASCII strings of decrypted.webm, as long as those strings are at least 10 characters long)

    And analysing the strings output, we’ll see the key for the next level 🙂

    PS: I believe there is a tool that takes a webm video and a timestamp (hh:mm:ss) as input and gives you the contents of the frame at that timestamp as output, which would save you from using strings (or would make it easier). But I’ll leave that for the W0pr, navarparty or Barcelona92 folks to tell us in the comments.
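
    PS2: for what it’s worth, ffmpeg can do roughly that: it can extract the frame at a given timestamp (the timestamp below is just an illustrative value, not the one from the challenge):

    $ ffmpeg -ss 00:00:30 -i decrypted.webm -frames:v 1 frame.png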

    17 de May de 2018

    Performance hackfest

    Last evening I came back from the GNOME performance hackfest happening in Cambridge. There was plenty of activity, clear skies, and pub evenings. Here are some incomplete and unordered items, just the ones I could do/remember/witness/talk about/overhear:

    • Xwayland 1.20 seems to be a big battery saver. Christian Kellner noticed that X11 Firefox playing YouTube could take his laptop to >20W consumption, traced to fairly intensive GPU activity. One of the first things we did was to try master, which dropped power draw to 8-9W. We presumed this was due to the implementation of the Present extension.
    • I was looking into dropping the gnome-shell usage of AtspiEventListener for the OSK. It is really taxing on CPU usage (even if the events we want are a minuscule subset, gnome-shell will forever get all that D-Bus traffic, and a11y is massively verbose), plus it slowly but steadily leaks memory.

      For the other remaining path I started looking into at least being able to deinitialize it. The leak deserves investigation, but I thought my time could be better invested on other things than learning yet another codebase.

    • Jonas Ådahl and Christian Hergert worked towards having Mutter dump detailed per-frame information, and Sysprof able to visualize it. This is quite exciting, as all the classic options just let us know where we spend time overall, but don’t let us know whether we missed the frame mark, nor why precisely that would be. Update: it’s been pointed out to me that Eric Anholt also worked on GPU perf events in mesa/vc4, so this info could also be visualized through Sysprof.
    • Peter Robinson and Marco Trevisan ran into some unexpected trouble when booting GNOME on an ARM board with no input devices whatsoever. I helped a bit with debugging and ideas, and Marco did some patches to neatly handle this situation.
    • Hans de Goede did some nice progress towards having the GDM session consume as little as possible while switched away from it.
    • Some patch review went on, Jonas/Marco/me spent some time looking at a screen very close and discussing the mipmapping optimizations from Daniel Van Vugt.
    • I worked towards fixing the reported artifact from my patches to aggressively cache paint volumes. These are basically one-off cases where individual ClutterActors break the invariants that would make caching possible.
    • Christian Kellner picked up my idea of performing pointer picking purely on the CPU side when the stage purely consists of 2D actors, instead of using the usual GL approach of “repaint in distinctive colors, read pixel to perform hit detection” which is certainly necessary for 3D, but quite a big roundtrip for 2D.
    • Alberto Ruiz and Richard Hughes talked about how to improve gnome-software memory usage in the background.
    • Alberto and I briefly dabbled with the idea of having a specific search provider API more tightly tied to Tracker, in order to ease the many context switches triggered by overview search.
    • On the train ride back, I unstashed and continued work on a WIP tracker-miners patch to have tracker-extract able to shutdown on inactivity. One less daemon to have usually running.

    Overall, it was a nice and productive event. IMO, having people with good knowledge both deep in the stack and wide across GNOME was decisive; I hope we can repeat this feat again soon!

    06 de May de 2018

    Updating Endless OS to GNOME Shell 3.26 (Video)

    It’s been a pretty hectic time during the past months for me here at Endless, busy with updating our desktop to the latest stable version of GNOME Shell (3.26, at the time the process started), among other things. And in all this excitement, it seems like I forgot to blog so I think this time I’ll keep it short for once, and simply link to a video I made a couple of months ago, right when I was about to finish the first phase of the process (which ended up taking a bit longer than expected).

    Note that the production of this video is far from high quality (unsurprisingly), but the feedback I got so far is that it has apparently been very useful to explain to less technically inclined people what doing a rebase of these characteristics means, and with that in mind I woke up this morning realizing that it might be good to give it its own entry in my personal blog, so here it is.


    (Pro-tip: Enable video subtitles to see contextual info)

    Granted, this hasn’t been a task as daunting as The Great Rebase I was working on one year ago, but still pretty challenging for a different set of reasons that I might leave for a future, and more detailed, post.

    Hope you enjoy watching the video as much as I did making it.

    21 de March de 2018

    Updated Chromium Legacy Wayland Support

    Introduction

    The future Ozone Wayland backend is still not ready for shipping. So we are announcing the release of an updated Ozone Wayland backend for Chromium, based on the implementation provided by Intel. It is rebased on top of the latest stable Chromium release and you can find it in my team’s GitHub. Hope you will appreciate it.

    Official Chromium on Linux desktop nowadays

    Linux desktop is progressively migrating to use Wayland as the display server. It is the default option in Fedora, Ubuntu ~~and, more importantly, the next Ubuntu Long Term Support release will ship Gnome Shell Wayland display server by default~~ (P.S. since this post was originally written, Ubuntu has delayed the Wayland adoption for LTS).

    As of now, Chromium browser support for the Linux desktop is based on X11. This means it will natively interact with an X server and with its XDG extensions for displaying the contents and receiving user events. But, as said, the next generation of the Linux desktop will be using Wayland display servers instead of X11. How does it work then? Using the XWayland server, a full X11 server built on top of the Wayland protocol. OK, but that has an impact on performance: Chromium needs to communicate with, and paint to, X11-provided buffers, and then those buffers need to be shared with the Wayland display server. And the user events need to be proxied from the Wayland display server through the XWayland server and the X11 protocol. It requires more resources: more memory, CPU, and GPU. And it adds more latency to the communication.

    Ozone

    Chromium supports officially several platforms (Windows, Android, Linux desktop, iOS). But it provides abstractions for porting it to other platforms.

    The set of abstractions is named Ozone (more info here). It allows implementing one or more platform components, with the hooks for properly integrating a new platform alongside the officially supported targets. Among other things it provides abstractions for:
    * Obtaining accelerated surfaces.
    * Creating and obtaining windows to paint the contents.
    * Interacting with the desktop cursor.
    * Receiving user events.
    * Interacting with the window manager.

    Chromium and Wayland (2014-2016)

    Even if Wayland was not used on Linux desktop, a bunch of embedded devices have been using Wayland for their display server for quite some time. LG has been shipping a full Wayland experience on the webOS TV products.

    Over the last 4 years, Intel has been providing an implementation of the Ozone abstractions for Wayland. It was amazing work that allowed running the Chromium browser on top of a Wayland compositor. This backend has been the de facto standard for running the Chromium browser on all these Wayland-enabled embedded devices.

    But the development of this implementation has mostly stopped around Chromium 49 (though rebases on top of Chromium 51 and 53 have been provided).

    Chromium and Wayland (2018+)

    Since the end of 2016, Igalia has been involved in several initiatives to allow Chromium to run natively on Wayland. Even if this work is based on the original Ozone Wayland backend by Intel, it is mostly a rewrite and adaptation to the future graphics architecture in Chromium (Viz and Mus).

    This is being developed in the Igalia GitHub, downstream, though it is expected to be landed upstream progressively. Hopefully, at some point in 2018, this new backend will be fully ready for shipping products with it. But we are still not there. ~~Some major missing parts are Wayland TextInput protocol and content shell support~~ (P.S. since this was written, both TextInput and content shell support are working now!).

    More information on these posts from the authors:
    * June 2016: Understanding Chromium’s runtime ozone platform selection (by Antonio Gomes).
    * October 2016: Analysis of Ozone Wayland (by Frédéric Wang).
    * November 2016: Chromium, ozone, wayland and beyond (by Antonio Gomes).
    * December 2016: Chromium on R-Car M3 & AGL/Wayland (by Frédéric Wang).
    * February 2017: Mus Window System (by Frédéric Wang).
    * May 2017: Chromium Mus/Ozone update (H1/2017): wayland, x11 (by Antonio Gomes).
    * June 2017: Running Chromium m60 on R-Car M3 board & AGL/Wayland (by Maksim Sisov).

    Releasing legacy Ozone Wayland backend (2017-2018)

    OK, so the new Wayland backend is still not ready in some cases, and the old one is unmaintained. For that reason, LG is announcing the release of an updated legacy Ozone Wayland backend. It is essentially the original Intel backend, but ported to the current Chromium stable.

    Why? Because we want to provide a migration path to the future Ozone Wayland backend. And because we want to share this effort with other developers willing to run Chromium on Wayland immediately, or who are still using the old backend and cannot migrate to the new one right away.

    WARNING: If you are starting development of a product that is going to ship in 1-2 years… very likely your best option is to migrate now to the new Ozone Wayland backend (and help with the missing bits). We will stop maintaining the legacy backend ourselves once the new Ozone Wayland backend lands upstream and covers all our needs.

    What does this port include?
    * Rebased on top of Chromium m60, m61, m62 and m63.
    * Ported to GN.
    * It already includes some changes to adapt to the new Ozone Wayland refactors.

    It is hosted at https://github.com/lgsvl/chromium-src.

    Enjoy it!

    Originally published at the webOS Open Source Edition Blog, and licensed under Creative Commons Attribution 4.0.

    28 de December de 2017

    Frogr 1.4 released

    Another year goes by and, again, I feel the call to make one more release just before 2017 is over, so here we are: frogr 1.4 is out!

    Screenshot of frogr 1.4

    Yes, I know what you’re thinking: “Who uses Flickr in 2017 anyway?”. Well, as shocking as this might seem to you, it is apparently not just me who is using this small app, but also another 8,935 users out there issuing an average of 0.22 Queries Per Second every day (19008 queries a day) for the past year, according to the stats provided by Flickr for the API key.

    Granted, it may be not a huge number compared to what other online services might be experiencing these days, but for me this is enough motivation to keep the little green frog working and running, thus worth updating it one more time. Also, I’d argue that these numbers for a niche app like this one (aimed at users of the Linux desktop that still use Flickr to upload pictures in 2017) do not even look too bad, although without more specific data backing this comment this is, of course, just my personal and highly-biased opinion.

    So, what’s new? Some small changes and fixes, along with other less visible modifications, but still relevant and necessary IMHO:

    • Fixed integration with GNOME Software (fixed a bug regarding appstream data).
    • Fixed errors loading images from certain cameras & phones, such as the OnePlus 5.
    • Cleaned the code by finally migrating to using g_auto, g_autoptr and g_autofree.
    • Migrated to the meson build system, and removed all the autotools files.
    • Big update to translations, now with more than 22 languages 90% – 100% translated.

    Also, this is the first release that happens after having a fully operational centralized place for Flatpak applications (aka Flathub), so I’ve updated the manifest and I’m happy to say that frogr 1.4 is already available for i386, arm, aarch64 and x86_64. You can install it either from GNOME Software (details on how to do it at https://flathub.org), or from the command line by just doing this:

    flatpak install --from https://flathub.org/repo/appstream/org.gnome.frogr.flatpakref

    Also worth mentioning that, starting with Frogr 1.4, I will no longer be updating my PPA at Launchpad. I did that in the past to make it possible for Ubuntu users to have access to the latest release ASAP, but now we have Flatpak that’s a much better way to install and run the latest stable release in any supported distro (not just Ubuntu). Thus, I’m dropping the extra work required to deal with the PPA and flat-out recommending users to use Flatpak or wait until their distro of choice packages the latest release.

    And I think this is everything. As usual, feel free to check the main website for extra information on how to get frogr and/or how to contribute to it. Feedback and/or help is more than welcome.

    Happy new year everyone!

    07 de December de 2017

    OSK update

    There’s been a rumor that I was working on improving the gnome-shell on-screen keyboard. So what’s been going on here? Let me show you!

    The design has been based on the mockups at https://wiki.gnome.org/Design/OS/ScreenKeyboard, here’s how it looks in English (mind you, hasn’t gone through theming wizards):

    The keymaps get generated from CLDR (see here), which helped boost the number of supported scripts (c.f. caribou), some visual examples:

    As you can see there are still a few ugly ones, and the layouts aren’t as uniform as one might expect; these issues will be resolved over time.

    The additional supported scripts don’t mean much without a way to send those fancy chars/strings to the client. We traditionally were just able to send forged keyboard events, which means we were restricted to keycodes that had a representation in the current keymap. On X11 we are kind of stuck with that, but we can do better on Wayland. This work relies on a simplified version of the text input protocol that I’m giving the last proofreading before proposing as v3 (the branches currently use a private copy). Using a specific protocol allows for sending UTF-8 strings independently of the keymap, which is very convenient for text completion too.

    But there are keymaps where CLDR doesn’t dare to go; prominent examples are Chinese and Japanese. For those, I’m looking into properly leveraging IBus, so that pinyin-like input methods work by feeding the results into the suggestions box:

    Ni Hao!

    The suggestion box even kind of works with the typing-booster IBus IM. But you have to explicitly activate it; there is room for improvement here in the future.

    And there is of course still bad stuff and todo items. Some languages like Korean neither have a layout, nor input methods that accept latin input, so they are badly handled (read: not at all). It would also be nice to support shape-based input.

    Other missing things from the mockups are the special numeric and emoji keymaps, there’s some unpushed work towards supporting those, but I had to draw the line somewhere!

    The work has been pushed in mutter, gtk+ and gnome-shell branches, which I hope will get timely polished and merged this cycle 🙂

    09 de September de 2017

    WebDriver support in WebKitGTK+ 2.18

    WebDriver is an automation API to control a web browser. It allows creating automated tests for web applications independently of the browser and platform. WebKitGTK+ 2.18, which will be released next week, includes an initial implementation of the WebDriver specification.

    WebDriver in WebKitGTK+

    There’s a new process (WebKitWebDriver) that works as the server, processing the clients’ requests to spawn and control the web browser. The WebKitGTK+ driver is not tied to any specific browser; it can be used with any WebKitGTK+ based browser, but it uses MiniBrowser as the default. The driver uses the same remote-controlling protocol used by the remote inspector to communicate with and control the web browser instance. The implementation is not complete yet, but it’s enough for what many users need.

    The clients

    The web application tests are the clients of the WebDriver server. The Selenium project provides APIs for different languages (Java, Python, Ruby, etc.) to write the tests. Python is the only language supported by WebKitGTK+ for now. It’s not yet upstream, but we hope it will be integrated soon. In the meantime you can use our fork on GitHub. Let’s see an example to understand how it works and what we can do.

    from selenium import webdriver
    
    # Create a WebKitGTK driver instance. It spawns WebKitWebDriver 
    # process automatically that will launch MiniBrowser.
    wkgtk = webdriver.WebKitGTK()
    
    # Let's load the WebKitGTK+ website.
    wkgtk.get("https://www.webkitgtk.org")
    
    # Find the GNOME link.
    gnome = wkgtk.find_element_by_partial_link_text("GNOME")
    
    # Click on the link. 
    gnome.click()
    
    # Find the search form. 
    search = wkgtk.find_element_by_id("searchform")
    
    # Find the first input element in the search form.
    text_field = search.find_element_by_tag_name("input")
    
    # Type epiphany in the search field and submit.
    text_field.send_keys("epiphany")
    text_field.submit()
    
    # Let's count the links in the contents div to check we got results.
    contents = wkgtk.find_element_by_class_name("content")
    links = contents.find_elements_by_tag_name("a")
    assert len(links) > 0
    
    # Quit the driver. The session is closed so MiniBrowser 
    # will be closed and then WebKitWebDriver process finishes.
    wkgtk.quit()
    

    Note that this is just an example to show how to write a test and what kind of things you can do, there are better ways to achieve the same results, and it depends on the current source of public websites, so it might not work in the future.

    Web browsers / applications

    As I said before, WebKitWebDriver process supports any WebKitGTK+ based browser, but that doesn’t mean all browsers can automatically be controlled by automation (that would be scary). WebKitGTK+ 2.18 also provides new API for applications to support automation.

    • First of all, the application has to explicitly enable automation using webkit_web_context_set_automation_allowed(). It’s important to know that the WebKitGTK+ API doesn’t allow enabling automation in several WebKitWebContexts at the same time. The driver will spawn the application when a new session is requested, so the application should enable automation at startup. It’s recommended that applications add a new command line option to enable automation, and only enable it when provided.
    • After launching the application the driver will request the browser to create a new automation session. The signal “automation-started” will be emitted in the context to notify the application that a new session has been created. If automation is not allowed in the context, the session won’t be created and the signal won’t be emitted either.
    • A WebKitAutomationSession object is passed as parameter to the “automation-started” signal. This can be used to provide information about the application (name and version) to the driver that will match them with what the client requires accepting or rejecting the session request.
    • The WebKitAutomationSession will emit the signal “create-web-view” every time the driver needs to create a new web view. The application can then create a new window or tab containing the new web view that should be returned by the signal. This signal will always be emitted even if the browser already has an initial web view open; in that case it’s recommended to return the existing empty web view.
    • Web views are also automation aware, similar to ephemeral web views, web views that allow automation should be created with the constructor property “is-controlled-by-automation” enabled.
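
    To make the flow above more concrete, here is a minimal, hedged sketch of the application side using the GObject introspection bindings (the names follow the C API described above; “MyBrowser” and the version number are placeholders, and whether you run this behind a command line option is up to your application):

    import gi
    gi.require_version('Gtk', '3.0')
    gi.require_version('WebKit2', '4.0')
    from gi.repository import Gtk, WebKit2

    def on_create_web_view(session):
        # Views meant for automation must be created with the
        # construct-only property "is-controlled-by-automation".
        web_view = WebKit2.WebView(is_controlled_by_automation=True)
        window = Gtk.Window()
        window.add(web_view)
        window.show_all()
        return web_view

    def on_automation_started(context, session):
        # Provide name/version so the driver can match the
        # capabilities requested by the client.
        info = WebKit2.ApplicationInfo.new()
        info.set_name('MyBrowser')  # placeholder name
        info.set_version(1, 0, 0)   # placeholder version
        session.set_application_info(info)
        session.connect('create-web-view', on_create_web_view)

    context = WebKit2.WebContext.get_default()
    context.set_automation_allowed(True)  # only when automation was requested!
    context.connect('automation-started', on_automation_started)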

    This is the new API that applications need to implement to support WebDriver, it’s designed to be as safe as possible, but there are many things that can’t be controlled by WebKitGTK+, so we have several recommendations for applications that want to support automation:

    • Add a way to enable automation in your application at startup, like a command line option, that is disabled by default. Never allow automation in a normal application instance.
    • Enabling automation is not the only thing the application should do, so add an automation mode to your application.
    • Add visual feedback when in automation mode, like changing the theme, the window title or whatever that makes clear that a window or instance of the application is controllable by automation.
    • Add a message to explain that the window is being controlled by automation and the user is not expected to use it.
    • Use ephemeral web views in automation mode.
    • Use a temporary user profile in automation mode; do not allow automation to change the history, bookmarks, etc. of an existing user.
    • Do not load any homepage in automation mode, just keep an empty web view (about:blank) that can be used when a new web view is requested by automation.

    The WebKitGTK client driver

    Applications need to implement the new automation API to support WebDriver, but the WebKitWebDriver process doesn’t know how to launch the browsers. That information should be provided by the client using the WebKitGTKOptions object. The driver constructor can receive an instance of a WebKitGTKOptions object, with the browser information and other options. Let’s see how it works with an example to launch epiphany:

    from selenium import webdriver
    from selenium.webdriver import WebKitGTKOptions
    
    options = WebKitGTKOptions()
    options.browser_executable_path = "/usr/bin/epiphany"
    options.add_browser_argument("--automation-mode")
    epiphany = webdriver.WebKitGTK(browser_options=options)
    

    Again, this is just an example; Epiphany doesn’t even support WebDriver yet. Browsers or applications could create their own drivers on top of the WebKitGTK one to make it more convenient to use.

    from selenium import webdriver
    epiphany = webdriver.Epiphany()
    

    Plans

    During the next release cycle, we plan to do the following tasks:

    • Complete the implementation: add support for all commands in the spec and complete the ones that are partially supported now.
    • Add support for running the WPT WebDriver tests in the WebKit bots.
    • Add a WebKitGTK driver implementation for other languages in Selenium.
    • Add support for automation in Epiphany.
    • Add WebDriver support to WPE/dyz.

    22 de August de 2017

    Tracker requires SQLite >= 3.20 to be compiled with --enable-fts5

    Tracker is one of those pieces of software that get no special praise when things work, but where you wake up to personal insults on Bugzilla when they don’t. Today is one of those days.

    Several distros have been eager to push SQLite 3.20.0 still hot from the oven to their users, apparently ignoring the API and ABI incompatibilities that are described in the changelog. These do hit Tracker, and are only made visible at runtime.

    Furthermore, there is additional undocumented ABI breakage that makes FTS5 modules generated from pre/post-3.20.0 SQLite code backward and forward incompatible with the other versions. Tracker used to ship a copy of the FTS5 module, but this situation is no longer tenable.

    The solution then? Making it mandatory that SQLite >= 3.20.0 has FTS5 built in. The just-released Tracker 1.12.3 and 1.99.3 will error hard if that is not the case.
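
    If you want to check whether your SQLite has FTS5 built in, here’s a quick sketch (assuming Python’s sqlite3 module links against the system SQLite, which is typical on Linux distros):

    import sqlite3

    # Creating an FTS5 virtual table fails with "no such module: fts5"
    # when SQLite was built without --enable-fts5.
    con = sqlite3.connect(':memory:')
    try:
        con.execute('CREATE VIRTUAL TABLE fts_check USING fts5(content)')
        print('FTS5 is built in')
    except sqlite3.OperationalError as e:
        print('FTS5 missing:', e)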

    I’ve just sent this mail to distributor-list:

    Hi all,
    
    Sqlite 3.20.0 broke API/ABI for Tracker purposes. The change described in point 3 at http://sqlite.org/releaselog/3_20_0.html is not only backwards incompatible, but also brings backwards and forwards incompatibilities with standalone FTS5 modules like Tracker ships [1], all of those are only visible at runtime [2].
    
    FTS5 modules generated from SQLite < 3.20.0 won't work with >= 3.20.0, and the other way around. Since it's not tenable to ship multiple FTS5 module copies for pre/post 3.20.0, Tracker shall now make it a hard requirement that SQLite is compiled with builtin FTS5 (--enable-fts5) if SQLite >= 3.20.0 is found. The current Tracker FTS5 module is kept for older SQLite versions.
    
    This change applies to Tracker >=1.12.3 and >=1.99.3. I don't know if any distro pushed SQLite 3.20.0 in combination with a Tracker that is older than that, but it will be just as broken, those would require additional patches from the tracker-1.12 branch besides the described change.
    
    Please handle this promptly, wherever there's sqlite 3.20.0 without builtin FTS5 and/or tracker <= 1.12.2/1.99.2, there's unhappy Tracker users.
    
    Cheers,
      Carlos
    
    [1] Generated as described in
    http://sqlite.org/fts5.html#building_a_loadable_extension
    [2] https://bugzilla.gnome.org/show_bug.cgi?id=785883
    

    So, if you see errors mentioning “TrackerTokenizer”, please contact your distro packagers. I’ll close further incoming bugs as NOTGNOME.

    04 de August de 2017

    Back from GUADEC

    After spending a few days in Manchester with other fellow GNOME hackers and colleagues from Endless, I’m finally back at my place in the sunny land of Surrey (England) and I thought it would be nice to write some sort of recap, so here it is:

    The Conference

    Getting ready for GUADEC

    I arrived in Manchester on Thursday the 27th, just in time to go to the pre-registration event, where I met the rest of the gang and had some dinner, and that was already a great start. Let’s forget about the fact that I lost my badge even before leaving the place, which has to be some type of record (losing the badge before the conference starts, really?), but all in all it was great to meet old friends, as well as some new faces, that evening already.

    Then the 3 core days of GUADEC started. My first impression was that everything (including the accommodation at the university, which was awesome) was very well organized in general, and the venue made it a perfect place for this type of event, so I was already impressed even before things started.

    I attended many talks and all of them were great, but if I had to pick my 5 favourites I think they would be the following, in no particular order:

    • The GNOME Way, by Allan: A very insightful and inspiring talk that made me think about why we do the things we do, and why it matters. It also kicked off an interesting pub conversation with Allan later on, and I learned a new word in English (“principled“), so believe me, it was great.
    • Keynote: The Battle Over Our Technology, by Karen: I have no words to express how much I enjoyed this talk. Karen was very powerful on stage and the way she shared her experiences and connected them to why Free Software is important did leave a mark.
    • Mutter/gnome-shell state of the union, by Florian and Carlos: As a person who is getting increasingly involved with Endless’s fork of GNOME Shell, I found this one particularly interesting. Also, I found it rather funny at points, especially during “the NVIDIA slide”.
    • Continuous: Past, Present, and Future, by Emmanuele: Sometimes I talk to friends and it strikes me how quickly they dismiss things like CI/CD as “boring” or “not interesting”, which I couldn’t disagree with more. This is very important work and Emmanuele is kicking ass as the build sheriff, so his talk was very interesting to me too. Also, he’s got a nice cat.
    • The History of GNOME, by Jonathan: Truth be told, Jonathan had already given a rather similar talk internally at Endless a while ago, so it was not entirely new to me, but I enjoyed it a lot too because it brought back so many memories: starting with when I started with Linux (RedHat 5.2 + GNOME pre-release!), when I used GNOME 1.x at the University and then moved to GNOME 2.x later on… not to mention the funny anecdotes he told (I never imagined the phone ringing while sleeping could be a good thing). Perfectly timed for the 20th anniversary of GNOME indeed!

    As I said, I attended other talks as well and they were all great, so I’d encourage you to check the schedule and watch the recordings once they are available online; you won’t regret it.

    Closing ceremony

    And the next GUADEC will be in… Almería!

    One thing that surprised me this time was that I didn’t do as much hacking during the conference as on other occasions. Rather than seeing it as a bad thing, I believe that’s a clear indicator of how interesting and engaging the talks were this year, which made it a perfect return after missing 3 editions (yes, my last GUADEC was in 2013).

    All in all it was a wonderful experience, and I can’t thank and congratulate the local team and the volunteers who ran the conference this year well enough, so here’s a picture I took where you can see all the people standing up and clapping during the closing ceremony.

    Many thanks and congratulations for all the work done. Seriously.

    The Unconference

    After 3 days of conference, the second part started: “2 days and a bit” (I was leaving on Wednesday morning) of meeting people and hacking in a different venue, where we gathered to work on different topics, plus the occasional high-bandwidth meeting in person.

    GUADEC unconference

    As you might expect, my main interest this time was around GNOME Shell, which is my main duty at Endless right now. This means that, besides trying to be present in the relevant BoFs, I’ve spent quite some time participating in discussions that gathered both upstream contributors and people from different companies (e.g. Endless, Red Hat, Canonical).

    This was extremely helpful and useful for me since, now that we have rebased our fork of GNOME Shell onto 3.22, we’re in a much better position to converge and contribute back to upstream in a more reasonable fashion, as well as to collaborate on implementing new features that we already have in Endless but that haven’t made it upstream yet.

    And talking about those features, I’d like to highlight two things:

    First, the discussion we held with both developers and designers about the new improvements that are being considered for both the window picker and the apps view, where one of the ideas is to improve the apps view by (maybe) adding a new grid of favourite applications that users could customize, reorder… and so forth.

    According to the designers this proposal was partially inspired by what we have in Endless, so you can imagine I would be quite happy to see such a plan move forward, as we could help with the coding side of things upstream while reducing our diff for future rebases. Thing is, this is just a proposal for now, so nothing is set in stone yet, but I will definitely be interested in following and participating in the relevant discussions about this.

    Second, as my colleague Georges already vaguely mentioned in his blog post, we had an improvised meeting on Wednesday with one of the designers from Red Hat (Jakub Steiner), where we discussed a very particular feature upstream has wanted to have for a while and which Endless implemented downstream: management of folders using DnD, right from the apps view.

    This is something that Endless has had in its desktop since the beginning of time, but the implementation relied on a downstream-specific version of folders that Endless OS implemented even before folders were available in the upstream GNOME Shell, so contributing that back would have been… “interesting”. But fortunately, we dropped that custom implementation of folders and embraced the upstream solution during the last rebase to 3.22, and we’re in a much better position now to contribute our solution upstream. Once this lands, you should be able to create, modify, remove and use folders without having to open GNOME Software at all, just by dragging and dropping apps on top of other apps and folders, pretty much in a similar fashion to how you would do it in a mobile OS these days.

    We’re still at an early stage for this, though. Our current solution in Endless is based on some assumptions and tools that will simply not be the case upstream, so we will have to work with both the designers and the upstream maintainers to make this happen over the next months. Thus, don’t expect anything to land in the next stable release yet, but simply know we’ll be working on it, and that should hopefully arrive not too far in the future.

    The Rest

    This GUADEC has been a blast for me, and probably the best and my favourite edition ever among all those I’ve attended since 2008. The reasons for such a strong statement are diverse, but I think I can mention a few that are clear to me:

    From a personal point of view, I never felt so engaged and part of the community as this time. I don’t know if that has something to do with my recent duties at Endless (e.g. flatpak, GNOME Shell) or with something less “tangible”, but that’s the truth. Can’t state it well enough.

    From the perspective of Endless, the fact that 17 of us were there is something to be very excited and happy about, especially considering that I work remotely and only see 4 of my colleagues from the London area on a regular basis (i.e. one day a week). Being able to meet people I don’t regularly see, as well as some new faces, in person is always great, but having them all together “under the same ceilings” for 6 days was simply outstanding.

    GNOME 20th anniversary dinner

    Also, as it happened, this year was the celebration of the 20th anniversary of the GNOME project, so the whole thing was quite emotional too. Not to mention that Federico’s birthday happened during GUADEC, which was a more than nice… coincidence? :-) Ah! And we also had an incredible dinner on Saturday to celebrate that; it certainly couldn’t have been a better opportunity for me to attend this conference!

    Last, a nearly impossible thing happened: despite the demanding schedule that an event like this imposes (and I’m including our daily visit to the pubs here too), I managed to run between 5km and 10km every single day, which I believe is the first time that has happened in my life. I had definitely taken my running gear to other conferences before, but this was the only time I took it that seriously, and also the first time I joined other fellow GNOME runners in the process, which was quite fun as well.

    Final words

    I couldn’t finish this extremely long post without a brief note to acknowledge and thank all the many people who made this possible this year: the GNOME Foundation and the amazing group of volunteers who helped organize it, the local team who did an outstanding job at all levels (venue, accommodation, events…), my employer Endless for sponsoring my attendance and, of course, all the people who attended the event and made it such a special GUADEC this year.

    Thank you all, and see you next year in Almería!

    Credit to Georges Stavracas

    04 de July de 2017

    Endless OS 3.2 released!

    We just released Endless OS 3.2 to the world after a lot of really hard work from everyone here at Endless, including many important changes and fixes that spread pretty much across the whole OS: from the guts and less visible parts of the core system (e.g. a newer Linux kernel, OSTree and Flatpak improvements, updated libraries…) to other more visible parts including a whole rebase of the GNOME components and applications (e.g. mutter, gnome-settings-daemon, nautilus…), newer and improved “Endless apps” and a completely revamped desktop environment.

    By the way, before I dive deeper into the rest of this post, I’d like to remind you that Endless OS is an operating system that you can download for free from our website, so please don’t hesitate to check it out if you want to try it by yourself. But now, even though I’d love to talk in detail about ALL the changes in this release, I’d like to talk specifically about what has kept me busy most of the time since around March: the full revamp of our desktop environment, that is, our particular version of GNOME Shell.

    Endless OS 3.2 as it looks in my laptop right now

    If you’re already familiar with what Endless OS is and/or with the GNOME project, you might already know that Endless’s desktop is a forked and heavily modified version of GNOME Shell, but what you might not know is that it was specifically based on GNOME Shell 3.8.

    Yes, you read that right, no kidding: a now 4-year-old version of GNOME Shell was alive and kicking underneath the thousands of downstream changes that we added on top of it during all that time to implement the desired user experience for our target users, as we iterated based on the tons of user testing sessions, research and design visions that this company has been working on right since its inception. That includes porting very visible things such as the “Endless button”, the user menu, the apps grid right on top of the desktop, the ability to drag’n’drop icons around to re-organize that grid and easily manage folders (by just dragging apps into/out of folders), the integrated desktop search (+ additional search providers), the window picker mode… and many other things that are not visible at all, but that are required to deliver a tight and consistent experience to our users.

    Endless button showcasing the new “show desktop” functionality

    Aggregated system indicators and the user menu

    Of course, this situation was not optimal, and we finally decided we had found the right moment to tackle it in line with the 3.2 release, so I was tasked with leading the mission of “rebasing” our downstream changes on top of a newer shell (more specifically on top of GNOME Shell 3.22), which looked to me like a hell of a task when I started. Still, I didn’t really hesitate much and gladly picked it up right away, because I really did want to make our desktop experience even better, and this looked like a pretty good opportunity to do so.

    By the way, note that I put “rebasing” in quotes, and the reason is that the usual approach of taking your downstream patches on top of a certain version of an open source project and applying them on top of whatever newer version you want to update to didn’t really work here: the vast amount of changes, combined with the fact that the code base changed quite a bit between 3.8 and 3.22, made that strategy fairly complicated, so in the end we had to opt for a combination of rebasing some patches (when they were clean enough and still made sense) and re-implementing the desired functionality on top of the newer base.

    The integrated desktop search in action

    New implementation for folders in Endless OS (based on upstream’s)

    As you can imagine, and especially considering my fairly limited previous experience with things like mutter, clutter and the shell’s code, this proved to be a pretty difficult thing for me to take on, if I’m truly honest. However, maybe it’s precisely because of all those things that, now that it’s released, I look at the result of all these months of hard work and can’t help but feel very proud of what we achieved in this pretty tight time frame: we have a refreshed Endless OS desktop now with new functionality, better animations, better panels, better notifications, better folders (we ditched our own in favour of upstream’s), better infrastructure… better everything!

    Sure, it’s not perfect yet (no such thing as “finished software”, right?) and we will keep working hard over the next releases to fix known issues and make it even better, but what we have released today is IMHO a pretty solid 3.2 release that I feel very proud of, and one that is out there now for everyone to see, use and enjoy, and that is quite an achievement.

    Removing an app by dragging and dropping it into the trash bin

    Now, you might have noticed I used “we” most of the time in this post when referring to the hard work that we did, and that’s because this was not something I did by myself, not at all. While it’s still true that I started working on this mostly on my own and that I probably took on most of the biggest tasks myself, the truth is that several other people jumped in to help with this monumental task, tackling a fair amount of important tasks in parallel, and I’m pretty sure we couldn’t have released this by now if not for the team effort we managed to pull off here.

    I’m a bit afraid of forgetting to mention some people, but I’ll try anyway: many thanks to Cosimo Cecchi, Joaquim Rocha, Roddy Shuler, Georges Stavracas, Sam Spilsbury, Will Thomson, Simon Schampijer, Michael Catanzaro and of course the entire design team, who all joined me in this massive quest by taking some time alongside their other responsibilities to help tackle several tasks each, resulting in the shell being released on time.

    The window picker as activated from the hot corner (bottom right)

    Last, before I finish this post, I’d just like to pre-answer a couple of questions that I guess some of you might have already:

    Will you be proposing some of these changes upstream?

    Our intention is to reduce the diff with upstream as much as possible, which is the reason we have left many things from upstream untouched in Endless OS 3.2 (e.g. the date/menu panel) and the reason why we already made some fairly big changes for 3.2 to get closer in other places where we previously had our very own thing (e.g. folders), so rest assured we will upstream everything we can, as far as it’s possible and makes sense for upstream.

    Actually, we have already pushed many patches to the shell and related projects since Endless moved to GNOME Shell a few years ago, and I don’t see any reason why that would change.

    When will Endless OS desktop be rebased again on top of a newer GNOME Shell?

    If there’s anything we learned from this “rebasing” experience, it’s that we don’t want to go through it ever again, seriously :-). It made sense to be based on an old shell for some time while we were prototyping and developing our desktop based on our research, user testing sessions and so on, but we now have a fairly mature system, and the current plan is to move on from this situation, where we had changes on top of a 4-year-old codebase, to a point where we keep closer to upstream, with more frequent rebases from now on.

    Thus, the short answer to that question is that we plan to rebase the shell more frequently after this release, ideally two times a year so that we are never too far away from the latest GNOME Shell codebase.


    And I think that’s all. I’ve already written too much, so if you’ll excuse me I’ll get back to my Emacs (yes, I’m still using Emacs!) and let you enjoy this video of a recent development snapshot of Endless OS 3.2, created by my colleague Michael Hall a few days ago:


    (Feel free to visit our YouTube channel to check out more videos like this one)

    Also, quick shameless plug just to remind you that we have an Endless Community website which you can join and use to provide feedback, ask questions or simply to keep informed about Endless. And if real time communication is your thing, we’re also on IRC (#endless on Freenode) and Slack, so I very much encourage you to join us via any of these channels as well if you want.

    Ah! And before I forget, just a quick note to mention that this year I’m going to GUADEC again after a big break (my last one was in Brno, in 2013) thanks to my company, which is sponsoring my attendance in several ways, so feel free to say “hi!” if you want to talk to me about Endless, the shell, life or anything else.

    11 de June de 2017

    Next Tracker is 2.0.0

    There are a few plans in the pipeline for Tracker:

    Splitting core from miners

    Tracker is usually deemed a “metadata indexer”, although that’s just half the truth. Even though Tracker could be essentially considered that in its very early days, it made a bold move back in 0.7.x to use Sparql as the interface to store and retrieve this metadata, where both the indexers and the applications using this metadata talk the same language.

    So in reality, the storage and query language are useful by themselves. As per the plans, you’ll now have to distinguish between:
    – Tracker the RDF store, and
    – Tracker miners, the infrastructure and binaries to translate a number of entities (be it files or whatever) into Sparql, using the former for storage (see the sketch below).
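
    As a rough illustration of the split, this is more or less what the “RDF store” side already looks like to an application (a minimal sketch against the 1.x libtracker-sparql client API; the query itself is just an example):

    #include <libtracker-sparql/tracker-sparql.h>
    
    int main(void)
    {
        GError *error = NULL;
    
        /* Talk to the store directly; no miners are involved here. */
        TrackerSparqlConnection *conn = tracker_sparql_connection_get(NULL, &error);
        if (!conn)
            g_error("Could not connect to the Tracker store: %s", error->message);
    
        /* Whatever put the data there (miners or apps), it is queried the same way. */
        TrackerSparqlCursor *cursor =
            tracker_sparql_connection_query(conn,
                                            "SELECT ?urn { ?urn a nfo:FileDataObject }",
                                            NULL, &error);
        while (cursor && tracker_sparql_cursor_next(cursor, NULL, NULL))
            g_print("%s\n", tracker_sparql_cursor_get_string(cursor, 0, NULL));
    
        g_clear_object(&cursor);
        g_object_unref(conn);
        return 0;
    }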

    Making standalone Sparql endpoints possible

    At the time of the move to Sparql, Tracker was conceived as a global store of deeply interconnected data, Nepomuk seemed the most obvious choice to represent the data for indexing purposes, and client isolation was basically left to 3rd parties. However, times change, sandboxing is very present, and Tracker’s global store doesn’t help with it.

    Based on initial work from Philip Van Hoof, quite some shuffling has been going on in wip/carlosg/domain-ontologies to make multiple Sparql endpoints (“database connections” in non-fancy speak) possible. This will allow applications to use private Sparql endpoints, and possibly to use other ontologies (“schemas” in non-fancy speak) than Nepomuk. The benefits are twofold: this will be a lot friendlier to sandboxing, and it also increases the versatility of the Tracker core.

    Switching to Semantic versioning

    This change was suggested some time ago by Philip Van Hoof, and the idea has been growing on me. Tracker has long made backwards compatibility promises, not just in terms of API, but also in terms of data compatibility. However, bumping the version as per the GNOME schedule results in multiple stable versions being used out there, with the patch backporting and bug management overhead that implies.

    In reality, what we want to tell people (and usually do!) is “use the latest stable Tracker”; often the stuff is already fixed there, and there’s no reason why you would want to stick to a stable series that will receive limited improvements. I do hope that semantic versioning conveys Tracker’s “later is better” stance; with some optimism I see it standing on 2.y.z for a long time, and maybe even 2.0.z for the Tracker core.

    But versions are like age, just a number :). Tracker 2.x services and client-side API will foreseeably be backwards compatible with 1.x from the reader standpoint. The Tracker core could be made parallel-installable with 1.0, but I wouldn’t even bother with the high-level infrastructure; 2.x will just be a better 1.x.

    But this doesn’t mean we jump the GNOME unstable cycle ship. IMHO, it’s still worth following to let newly introduced code bake in; it just won’t result in gratuitous version bumps if API changes/additions are not in sight.

    Code and API cleanup

    In the more active past, Tracker had a lot of code accretion while trying to get buy-in; this included multiple miners, extensions and tools. But it was never the goal of Tracker to be the alpha and omega of indexing in itself, rather to have applications update and blend the data for their own consumption. Fast forward a few years and the results are mixed: Tracker got a fair amount of success, although apps almost exclusively rely on data produced by Tracker’s own miners, while most of these extensions have been bitrotting since much of the activity and manpower went away.

    Sam Thursfield started doing some nice cleanups of maemo/meego-specific code (yes, we still had that) and making Tracker use Meson (which indirectly started stirring up some of that bitrotten code). Several of these extensions implementing Tracker support shall just go to the attic and should be redone properly; it will at least be the case with nautilus thanks to Alexandru Pandelea’s GSOC work :).

    But the version bump and code split are too good an opportunity to miss :). Some deprecated/old API will also go away, probably none of which you’ve ever used; there will be some porting documentation anyway.

    20 de May de 2017

    Frogr 1.3 released

    Quick post to let you know that I just released frogr 1.3.

    This is mostly a small update to incorporate a bunch of translation updates, a few changes aimed at improving the flatpak version (the desktop icon had been broken for a while until a few weeks ago) and the removal of some calls deprecated in recent versions of GTK+.

    Ah! I’ve also officially dropped support for OS X via gtk-osx, as I had been systematically failing to update and use it (I only use frogr from GNOME these days) for a loooong time, and so it did not make sense for me to keep pretending that the Mac version is something usable and maintained anymore.

    As usual, you can go to the main website for extra information on how to get frogr and/or how to contribute to it. Any feedback or help is more than welcome!


    03 de May de 2017

    WebKitGTK+ remote debugging in 2.18

    WebKitGTK+ has supported remote debugging for a long time. The current implementation uses WebSockets for the communication between the local browser (the debugger) and the remote browser (the debug target or debuggable). This implementation was very simple and, in theory, you could use any web browser as the debugger because all the inspector code was served over the WebSockets. I say in theory because in practice this was not always so easy, since the inspector code uses newer JavaScript features that are not implemented in other browsers yet. The other major issue of this approach was that the communication between debugger and target was not bi-directional, so the target browser couldn’t notify the debugger about changes (like a new tab being opened, a navigation, or a tab about to be closed).

    Apple abandoned the WebSockets approach a long time ago and implemented its own remote inspector, using XPC for the communication between debugger and target. They also moved the remote inspector handling to JavaScriptCore, making it available to debug JavaScript applications without a WebView too. In addition, the remote inspector is also used by Apple to implement WebDriver. We think that this approach has a lot more advantages than disadvantages compared to the WebSockets solution, so we have been working on making it possible to use this new remote inspector in the GTK+ port too. After some refactoring to separate the cross-platform implementation from the Apple one, we could add our implementation on top of that. This implementation is already available in WebKitGTK+ 2.17.1, the first unstable release of this cycle.

    From the user point of view there aren’t many differences. With the WebSockets approach we launched the target browser this way:

    $ WEBKIT_INSPECTOR_SERVER=127.0.0.1:1234 browser
    

    This hasn’t changed with the new remote inspector. To start debugging, we used to open any browser and load

    http://127.0.0.1:1234

    With the new remote inspector we have to use any WebKitGTK+ based browser and load

    inspector://127.0.0.1:1234

    As you may have noticed, it’s no longer possible to use any web browser: you need to use a recent enough WebKitGTK+ based browser as the debugger. This is because of the way the new remote inspector works: it requires a frontend implementation that knows how to communicate with the targets. In the case of Apple, that frontend implementation is Safari itself, which has a menu with the list of remote debuggable targets. In WebKitGTK+ we didn’t want to force using a particular web browser as the debugger, so the frontend is implemented as a builtin custom protocol of WebKitGTK+. So, loading inspector:// URLs in any WebKitGTK+ WebView will show the remote inspector page with the list of debuggable targets.

    It looks quite similar to what we had, just a list of debuggable targets, but there are a few differences:

    • A new debugger window is opened when the inspect button is clicked, instead of reusing the same web view. Clicking on inspect again just brings the window to the front.
    • The debugger window loads faster, because the inspector code is not served by HTTP, but locally loaded like the normal local inspector.
    • The target list page is updated automatically, without having to manually reload it when a target is added, removed or modified.
    • The debugger window is automatically closed when the target web view is closed or crashes.

    How does the new remote inspector work?

    The web browser checks for the presence of the WEBKIT_INSPECTOR_SERVER environment variable at startup, the same way it was done with the WebSockets. If present, the RemoteInspectorServer is started in the UI process, running a DBus service listening on the IP and port provided. The environment variable is propagated to the child web processes, which create a RemoteInspector object and connect to the RemoteInspectorServer. There’s one RemoteInspector per web process, and one debuggable target per WebView. Every RemoteInspector maintains a list of debuggable targets that is sent to the RemoteInspectorServer when a new target is added, removed or modified, or when explicitly requested by the RemoteInspectorServer.

    When the debugger browser loads an inspector:// URL, a RemoteInspectorClient is created. The RemoteInspectorClient connects to the RemoteInspectorServer using the IP and port of the inspector:// URL and asks for the list of targets, which is used by the custom protocol handler to create the web page. The RemoteInspectorServer works as a router, forwarding messages between RemoteInspector and RemoteInspectorClient objects.

    20 de March de 2017

    WebKitGTK+ 2.16

    The Igalia WebKit team is happy to announce WebKitGTK+ 2.16. This new release drastically improves the memory consumption, adds new API as required by applications, includes new debugging tools, and of course fixes a lot of bugs.

    Memory consumption

    After WebKitGTK+ 2.14 was released, several Epiphany users started to complain about the high memory usage of WebKitGTK+ when Epiphany had a lot of tabs open. As we already explained in a previous post, this was because of the switch to the threaded compositor, which meant hardware acceleration was always enabled. To fix this, we decided to make hardware acceleration optional again, enabled only when websites require it, but still using the threaded compositor. This is by far the major improvement in memory consumption, but not the only one. Even when in accelerated compositing mode, we managed to reduce the memory required by GL contexts when using GLX, by using OpenGL version 3.2 (core profile) if available. In Mesa-based drivers that means the software rasterizer fallback is never required, so the context doesn’t need to create the software rasterization part. And finally, an important bug was fixed in the JavaScript garbage collector timers that prevented garbage collection from happening in some cases.

    CSS Grid Layout

    Yes, the future is here, and it’s now available by default in all WebKitGTK+ based browsers and web applications. This is the result of several years of great work by the Igalia web platform team in collaboration with Bloomberg. If you are interested, you have all the details in Manuel’s blog.

    New API

    The WebKitGTK+ API is quite complete now, but there are always new things required by our users.

    Hardware acceleration policy

    Hardware acceleration is now enabled on demand again: when a website requires accelerated compositing, hardware acceleration is enabled automatically. WebKitGTK+ has environment variables to change this behavior, WEBKIT_DISABLE_COMPOSITING_MODE to never enable hardware acceleration and WEBKIT_FORCE_COMPOSITING_MODE to always enable it. However, those variables were never meant to be used by applications, but only by developers to test the different code paths. The main problem with those variables is that they apply to all web views of the application. Not all WebKitGTK+ applications are web browsers, so it can happen that an application knows it will never need hardware acceleration for a particular web view, like for example the Evolution composer, while other applications, especially in the embedded world, always want hardware acceleration enabled and don’t want to waste time and resources with the switch between modes. For those cases a new WebKitSetting, hardware-acceleration-policy, has been added. We encourage everybody to use this setting instead of the environment variables when upgrading to WebKitGTK+ 2.16.
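
    For example, an application that knows a particular web view will never need accelerated compositing could do something like this (a minimal sketch using the new setting; error handling omitted):

    #include <webkit2/webkit2.h>
    
    static void
    disable_hardware_acceleration (WebKitWebView *web_view)
    {
        WebKitSettings *settings = webkit_web_view_get_settings (web_view);
    
        /* Never enable accelerated compositing for this web view, instead
         * of relying on the WEBKIT_DISABLE_COMPOSITING_MODE variable. */
        webkit_settings_set_hardware_acceleration_policy (settings,
            WEBKIT_HARDWARE_ACCELERATION_POLICY_NEVER);
    }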

    Network proxy settings

    Since the switch to WebKit2, where the SoupSession is no longer available from the API, it hasn’t been possible to change the network proxy settings from the API. WebKitGTK+ has always used the default proxy resolver when creating the soup context, and that just works for most of our users. But there are some corner cases in which applications that don’t run under a GNOME environment want to provide their own proxy settings instead of using the proxy environment variables. For those cases, WebKitGTK+ 2.16 includes a new UI process API to configure all the proxy settings available in the GProxyResolver API.
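
    A minimal sketch of how an application could use this new API (the proxy URI and ignored host are just placeholders):

    #include <webkit2/webkit2.h>
    
    static void
    set_custom_proxy (WebKitWebContext *context)
    {
        /* Hosts that should bypass the proxy. */
        const gchar * const ignore_hosts[] = { "localhost", NULL };
        WebKitNetworkProxySettings *proxy =
            webkit_network_proxy_settings_new ("http://proxy.example.com:8080",
                                               ignore_hosts);
    
        /* Use our custom settings instead of the default proxy resolver. */
        webkit_web_context_set_network_proxy_settings (context,
            WEBKIT_NETWORK_PROXY_MODE_CUSTOM, proxy);
        webkit_network_proxy_settings_free (proxy);
    }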

    Private browsing

    WebKitGTK+ has always had a WebKitSetting to enable or disable the private browsing mode, but it has never worked really well. For that reason, applications like Epiphany have always implemented their own private browsing mode, just by using a different profile directory in tmp to write all persistent data. This approach has several issues; for example, if the UI process crashes, the profile directory is leaked in tmp with all the personal data there. WebKitGTK+ 2.16 adds a new API that makes it possible to create ephemeral web views which never write any persistent data to disk. It’s possible to create ephemeral web views individually, or to create ephemeral web contexts where all web views associated with them are ephemeral automatically.
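
    For example, a browser could implement its private browsing mode with something like this minimal sketch:

    #include <webkit2/webkit2.h>
    
    static WebKitWebView *
    create_private_browsing_view (void)
    {
        /* An ephemeral context: every web view created for it is ephemeral
         * too, so nothing is ever written to disk. */
        WebKitWebContext *context = webkit_web_context_new_ephemeral ();
        WebKitWebView *view =
            WEBKIT_WEB_VIEW (webkit_web_view_new_with_context (context));
    
        g_assert (webkit_web_view_is_ephemeral (view));
        return view;
    }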

    Website data

    WebKitWebsiteDataManager was added in 2.10 to configure the default paths where website data should be stored for a web context. In WebKitGTK+ 2.16 the API has been expanded to include methods to retrieve and remove the website data stored on the client side: not only persistent data like the HTTP disk cache, cookies or databases, but also non-persistent data like the memory cache and session cookies. This API is already used by Epiphany to implement its new personal data dialog.
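
    As a rough example, this is how an application could wipe all website data with the expanded API (a minimal sketch; error handling omitted):

    #include <webkit2/webkit2.h>
    
    static void
    clear_finished_cb (GObject *source, GAsyncResult *result, gpointer user_data)
    {
        webkit_website_data_manager_clear_finish (
            WEBKIT_WEBSITE_DATA_MANAGER (source), result, NULL);
    }
    
    static void
    clear_all_website_data (WebKitWebContext *context)
    {
        WebKitWebsiteDataManager *manager =
            webkit_web_context_get_website_data_manager (context);
    
        /* Remove every kind of data, persistent or not; a timespan of 0
         * means no time limit is applied. */
        webkit_website_data_manager_clear (manager, WEBKIT_WEBSITE_DATA_ALL, 0,
                                           NULL, clear_finished_cb, NULL);
    }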

    Dynamically added forms

    Web browsers normally implement the remember-passwords functionality by searching the DOM tree for authentication form fields when the document-loaded signal is emitted. However, some websites add the authentication form fields dynamically after the document has been loaded. In those cases, web browsers couldn’t find any form fields to autocomplete. In WebKitGTK+ 2.16 the web extensions API includes a new signal to notify when new forms are added to the DOM. Applications can connect to it, instead of document-loaded, to start searching for authentication form fields.
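
    From a web extension, using it looks roughly like this (a sketch assuming the new signal is form-controls-associated on WebKitWebPage, as added in 2.16):

    #include <webkit2/webkit-web-extension.h>
    
    /* Called whenever form controls are associated with the page,
     * including ones added dynamically after document-loaded. */
    static void
    form_controls_associated_cb (WebKitWebPage *web_page,
                                 GPtrArray     *elements,
                                 gpointer       user_data)
    {
        guint i;
        for (i = 0; i < elements->len; i++) {
            WebKitDOMElement *element = g_ptr_array_index (elements, i);
            gchar *tag = webkit_dom_element_get_tag_name (element);
    
            /* A real browser would look for password fields to autocomplete. */
            g_message ("New form control: %s", tag);
            g_free (tag);
        }
    }
    
    static void
    page_created_cb (WebKitWebExtension *extension,
                     WebKitWebPage      *web_page,
                     gpointer            user_data)
    {
        g_signal_connect (web_page, "form-controls-associated",
                          G_CALLBACK (form_controls_associated_cb), NULL);
    }
    
    G_MODULE_EXPORT void
    webkit_web_extension_initialize (WebKitWebExtension *extension)
    {
        g_signal_connect (extension, "page-created",
                          G_CALLBACK (page_created_cb), NULL);
    }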

    Custom print settings

    The GTK+ print dialog allows the user to add a new tab embedding a custom widget, so that applications can include their own print settings UI. Evolution used to do this, but the functionality was lost with the switch to WebKit2. In WebKitGTK+ 2.16 an API similar to the GTK+ one has been added to recover that functionality in Evolution.
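
    A minimal sketch of how an application could hook its own settings tab into the print dialog with this API (the widget and tab title are just placeholders; this assumes the create-custom-widget signal on WebKitPrintOperation, mirroring the GTK+ one):

    #include <webkit2/webkit2.h>
    
    /* Returns the custom widget to embed as a new tab in the print dialog. */
    static WebKitPrintCustomWidget *
    create_custom_widget_cb (WebKitPrintOperation *operation, gpointer user_data)
    {
        GtkWidget *widget = gtk_check_button_new_with_label ("My custom setting");
        gtk_widget_show (widget);
    
        return webkit_print_custom_widget_new (widget, "My settings");
    }
    
    static void
    print_with_custom_settings (WebKitWebView *web_view)
    {
        WebKitPrintOperation *operation = webkit_print_operation_new (web_view);
    
        g_signal_connect (operation, "create-custom-widget",
                          G_CALLBACK (create_custom_widget_cb), NULL);
        webkit_print_operation_run_dialog (operation, NULL);
    }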

    Notification improvements

    Applications can now set the initial notification permissions on the web context to avoid having to ask the user every time. It’s also possible to get the tag identifier of a WebKitNotification.
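
    A minimal sketch of how a browser could restore previously saved notification permissions at startup (the origins are just placeholders):

    #include <webkit2/webkit2.h>
    
    static void
    setup_notification_permissions (WebKitWebContext *context)
    {
        /* Origins the user already granted or denied in a previous session. */
        GList *allowed = NULL, *disallowed = NULL;
    
        allowed = g_list_append (allowed,
            webkit_security_origin_new_for_uri ("https://example.com"));
        disallowed = g_list_append (disallowed,
            webkit_security_origin_new_for_uri ("https://annoying.example.org"));
    
        webkit_web_context_initialize_notification_permissions (context,
                                                                allowed,
                                                                disallowed);
    
        g_list_free_full (allowed, (GDestroyNotify) webkit_security_origin_unref);
        g_list_free_full (disallowed, (GDestroyNotify) webkit_security_origin_unref);
    }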

    Debugging tools

    Two new debugging tools are now available in WebKitGTK+ 2.16: the memory sampler and the resource usage overlay.

    Memory sampler

    This tool makes it possible to monitor the memory consumption of the WebKit processes. It can be enabled by defining the environment variable WEBKIT_SAMPLE_MEMORY. When enabled, the UI process and all web processes automatically take samples of memory usage every second. For every sample, a detailed report of the memory used by the process is generated and written to a file in the temp directory.

    $ WEBKIT_SAMPLE_MEMORY=1 MiniBrowser 
    Started memory sampler for process MiniBrowser 32499; Sampler log file stored at: /tmp/MiniBrowser7ff2246e-406e-4798-bc83-6e525987aace
    Started memory sampler for process WebKitWebProces 32512; Sampler log file stored at: /tmp/WebKitWebProces93a10a0f-84bb-4e3c-b257-44528eb8f036
    

    The files contain a list of sample reports like this one:

    Timestamp                          1490004807
    Total Program Bytes                1960214528
    Resident Set Bytes                 84127744
    Resident Shared Bytes              68661248
    Text Bytes                         4096
    Library Bytes                      0
    Data + Stack Bytes                 87068672
    Dirty Bytes                        0
    Fast Malloc In Use                 86466560
    Fast Malloc Committed Memory       86466560
    JavaScript Heap In Use             0
    JavaScript Heap Committed Memory   49152
    JavaScript Stack Bytes             2472
    JavaScript JIT Bytes               8192
    Total Memory In Use                86477224
    Total Committed Memory             86526376
    System Total Bytes                 16729788416
    Available Bytes                    5788946432
    Shared Bytes                       1037447168
    Buffer Bytes                       844214272
    Total Swap Bytes                   1996484608
    Available Swap Bytes               1991532544
    

    Resource usage overlay

    The resource usage overlay is only available on Linux systems when WebKitGTK+ is built with ENABLE_DEVELOPER_MODE. It shows an overlay with information about the resources currently in use by the web process, like CPU usage, total memory consumption, JavaScript memory, and JavaScript garbage collector timers information. The overlay can be shown/hidden by pressing Ctrl+Shift+G.

    We plan to add more information to the overlay in the future like memory cache status.