Planeta GNOME Hispano
Hispanic GNOME activity, 24 x 7

21 March 2018

Updated Chromium Legacy Wayland Support


The future Ozone Wayland backend is still not ready for shipping, so we are announcing the release of an updated Ozone Wayland backend for Chromium, based on the implementation provided by Intel. It is rebased on top of the latest stable Chromium release, and you can find it in my team's GitHub. We hope you will appreciate it.

Official Chromium on the Linux desktop nowadays

The Linux desktop is progressively migrating to Wayland as its display server. It is the default option in Fedora, Ubuntu ~~and, more importantly, the next Ubuntu Long Term Support release will ship the GNOME Shell Wayland display server by default~~ (P.S. since this post was originally written, Ubuntu has delayed the Wayland adoption for the LTS).

As of now, Chromium browser support for the Linux desktop is based on X11. This means it natively interacts with an X server and with its XDG extensions for displaying content and receiving user events. But, as said, the next generation of the Linux desktop will be using Wayland display servers instead of X11. How does that work? Through the XWayland server, a full X11 server built on top of the Wayland protocol. OK, but that has an impact on performance: Chromium needs to communicate with and paint to X11-provided buffers, and then those buffers need to be shared with the Wayland display server. And user events need to be proxied from the Wayland display server through the XWayland server and the X11 protocol. It requires more resources (more memory, CPU and GPU) and it adds more latency to the communication.


Chromium officially supports several platforms (Windows, Android, Linux desktop, iOS), but it also provides abstractions for porting it to other platforms.

This set of abstractions is named Ozone (more info here). It allows implementing one or more platform components with the hooks for properly integrating with a platform that is not in the set of officially supported targets. Among other things, it provides abstractions for:
* Obtaining accelerated surfaces.
* Creating and obtaining windows to paint the contents.
* Interacting with the desktop cursor.
* Receiving user events.
* Interacting with the window manager.

Chromium and Wayland (2014-2016)

Even if Wayland was not used on the Linux desktop, a number of embedded devices have been using Wayland as their display server for quite some time. LG has been shipping a full Wayland experience on its webOS TV products.

In the last 4 years, Intel has been providing an implementation of the Ozone abstractions for Wayland. It was amazing work that allowed running the Chromium browser on top of a Wayland compositor. This backend has been the de facto standard for running the Chromium browser on all these Wayland-enabled embedded devices.

But development of this implementation mostly stopped around Chromium 49 (though rebases on top of Chromium 51 and 53 were provided).

Chromium and Wayland (2018+)

Since the end of 2016, Igalia has been involved in several initiatives to allow Chromium to run natively on Wayland. Even if this work is based on the original Ozone Wayland backend by Intel, it is mostly a rewrite and an adaptation to the future graphics architecture in Chromium (Viz and Mus).

This is being developed in the Igalia GitHub, downstream, though it is expected to land upstream progressively. Hopefully, at some point in 2018, this new backend will be fully ready for shipping products with it. But we are not there yet. ~~Some major missing parts are the Wayland TextInput protocol and content shell support~~ (P.S. since this was written, both TextInput and content shell support are working now!).

More information on these posts from the authors:
* June 2016: Understanding Chromium’s runtime ozone platform selection (by Antonio Gomes).
* October 2016: Analysis of Ozone Wayland (by Frédéric Wang).
* November 2016: Chromium, ozone, wayland and beyond (by Antonio Gomes).
* December 2016: Chromium on R-Car M3 & AGL/Wayland (by Frédéric Wang).
* February 2017: Mus Window System (by Frédéric Wang).
* May 2017: Chromium Mus/Ozone update (H1/2017): wayland, x11 (by Antonio Gomes).
* June 2017: Running Chromium m60 on R-Car M3 board & AGL/Wayland (by Maksim Sisov).

Releasing legacy Ozone Wayland backend (2017-2018)

OK, so the new Wayland backend is not ready yet in some cases, and the old one is unmaintained. For that reason, LG is announcing the release of an updated legacy Ozone Wayland backend. It is essentially the original Intel backend, but ported to the current Chromium stable.

Why? Because we want to provide a migration path to the future Ozone Wayland backend, and because we want to share this effort with other developers willing to run Chromium on Wayland right away, or who are still using the old backend and cannot immediately migrate to the new one.

WARNING: if you are starting development of a product that is going to ship in 1-2 years, very likely your best option is to migrate to the new Ozone Wayland backend now (and help with the missing bits). We will stop maintaining the legacy backend ourselves once the new Ozone Wayland backend lands upstream and covers all our needs.

What does this port include?
* Rebased on top of Chromium m60, m61, m62 and m63.
* Ported to GN.
* It already includes some changes to adapt to the new Ozone Wayland refactors.

It is hosted at

Enjoy it!

Originally published at the webOS Open Source Edition Blog and licensed under Creative Commons Attribution 4.0.

19 March 2018

IEEE Xplore, India and plagiarism in academic papers

In my daily reading of papers I have lately come across some pieces, available on the IEEE Xplore website, that bring unpleasant surprises. Today's one "amused" me in particular. It is "A Novel Approach for Medical Assistance Using Trained Chatbot", from the conference "2017 International Conference on Inventive Communication and Computational Technologies (ICICCT)", and available, as I said, on IEEE Xplore.

There are five authors, all from the same department: the Department of Computer Science & Engineering, Muthoot Institute of Technology and Science-Varikoli, an educational institution in India.

It caught my attention that none of the authors used an email address from the institution they belong to. All of them are Gmail addresses.

On the other hand, there are at least a couple of glaring grammatical errors in the abstract. These are "bad smells", heuristics indicating that the worst is yet to come.

Indeed, the paper is riddled with more spelling and grammar mistakes but, above all, with "oversights" when it comes to citing. Yes, that is an elegant way of saying plagiarism. I noticed it while reading the paragraphs: all of them of terrible quality except for a couple, written in perfect, polished English. Of course, with no citation, no rephrasing, no quotation marks, nothing. Literal copying without crediting the author. Quite the nerve.

A Google search led me to the original source:

“Content is king, so don’t distract your user with fancy but redundant features. Also, simplicity is what helped the most successful brands win our hearts. These things are the core of a Chatbot concept that’s why they are doomed for success.”

Although, searching a bit more, I got an extra surprise. On the one hand, that article "seems" to be the original. But according to Google, it was published on 10 November 2016. And, again according to Google, that very same paragraph was published by this other website on 12 June 2016:
[Quiz] 9 Reasons to Build a ChatBot Now – Letzgro

Something told me that could not be right… The Letzgro website smells like a predatory site, with a rather poor graphic style. The Chatbots Journal website, on the other hand, has a very polished style and publishes exclusively articles related to the topic at hand. What is going on? Well, I don't know how Letzgro did it, but according to the Web Archive that page is not from June 2016: it appeared in 2017.

PS: the plagiarizing paper got out of hand. In the next sentence I read: "You may read about these two in more detail in some of our other blog posts". Yeah… blog posts. The worst part: IEEE Xplore charges $31 to read it 🙁

4 February 2018

How to access the value of an array key in a Javascript Map object?

I wrote this message on StackOverflow and, just before clicking the Send button, I stumbled upon the solution. I don't want to lose it, and I don't have time today to publish the solution, so I decided to do it in two parts: first publish the question, and later (tomorrow?) publish the solution. Here we go!

I can set() a value using an array as a key in a JS Map, but it seems that there isn't a clean interface for accessing that same value through the symmetric get() method:

let z = new Map();
    z.set([1,2], "a"); // Works as expected: Map { [ 1, 2 ] => 'a' }
    z.get([1,2]); // undefined (!)

I suspect that this behaviour has something to do with the fact that in JS:

[1,2] == [1,2] // false: arrays are compared by reference, not by contents

I can use [1,2].toString() as a key, and then z.get([1,2].toString()) works as expected, but I'm wondering if there is any other, "cleaner" way to code that.

Well, Map objects are a new addition in ES6, and it seems that ES6 Maps have a problem if you try to use them with object keys. This has been discussed here and here.
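As a minimal sketch of that behaviour (plain ES6, nothing beyond the standard Map API assumed): Map compares object keys by identity, so an equal-looking array is a different key, while the stringified form collides as intended:

```javascript
// Map compares object keys by identity (SameValueZero), not by contents.
const m = new Map();
const key = [1, 2];

m.set(key, "a");
console.log(m.get(key));    // "a": the exact same array object
console.log(m.get([1, 2])); // undefined: a new, different array object

// The toString() workaround works because both arrays produce "1,2":
m.set([1, 2].toString(), "b");
console.log(m.get([1, 2].toString())); // "b"
```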

A solution that WorksForMe was proposed in that same StackOverflow thread: an ad-hoc HashMap wrapper that takes a hash function as a parameter to properly store and look up object keys (specifically, for my problem, array keys):

function HashMap(hash) {
  var map = new Map();
  var _set = map.set;
  var _get = map.get;
  var _has = map.has;
  var _delete = map.delete;
  map.set = function (k, v) {
    return, hash(k), v);
  map.get = function (k) {
    return, hash(k));
  map.has = function (k) {
    return, hash(k));
  map.delete = function (k) {
    return, hash(k));
  return map;

I have used it as follows (note that JSON.stringify is NOT a hash function but, as I said, it works for my example because I know for sure that my array keys are not going to collide). I should think about a proper hash function, or use something from here, but as I said, I'm lazy today 🙂

let z = new HashMap(JSON.stringify);
  z.set([1,2], "a");
  z.get([1,2]); // "a"
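A word of caution on that JSON.stringify shortcut, sketched below with nothing but the standard JSON API: for plain objects it is sensitive to property insertion order, so two equal-looking objects can produce different keys; for arrays (my use case) the element order is part of the key's meaning, so equal arrays always stringify identically:

```javascript
// JSON.stringify is not a real hash: for plain objects it depends on
// property insertion order, so "equal" objects can yield different keys…
console.log(JSON.stringify({ a: 1, b: 2 })); // '{"a":1,"b":2}'
console.log(JSON.stringify({ b: 2, a: 1 })); // '{"b":2,"a":1}'

// …but two arrays with the same elements always stringify identically:
console.log(JSON.stringify([1, 2]) === JSON.stringify([1, 2])); // true
```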

23 January 2018

I'm going to FOSDEM 2018

I'm going to FOSDEM

Yeah. I finally decided I'm going to FOSDEM this year. 2018 is the year I'm taking my life back the way I like it, and the right way to start is meeting all those friends and colleagues I missed during those years of exile. I plan to attend the beer event as soon as I arrive in Brussels.

If you want to talk to me about GUADEC 2018, Fedora Flock 2018 or anything else, please reach me on Twitter (@olea) or Telegram (@IsmaelOlea).

BTW, there are a couple of relevant FOSDEM-related Telegram groups:

General English Telegram group:

FOSDEM Telegram Group

Spanish spoken one:

FOSDEM grupo Telegram en español

PS: A funny thing about FOSDEM is… this is the place where the Spanish (or indeed Madrilenian) opensource enthusiasts can all meet, once a year… in Brussels!

22 January 2018

2018 and disappointment

These things happen. While cleaning up my folders to remove clutter and archive content, I found the draft of a blog post that was forgotten and never published. It is dated exactly 9 March 2014:

A hot month of March in Almería

HackLab Almería

What a month of March in Almería. We started in February with the X Jornadas SLCENT at IES Al-Ándalus. Then the Computer Science conference at the University of Almería, with more than ten talks. Today was the provincial First Lego League tournament, with 14 teams from the province competing. And among the events still to come: the next meeting of the Open Hardware Workshop, ElHackatón, the next BetaBeers and, ta-da, the Arduino Day + 3D Printing Seminar. Incredible!

It is an opportunity to reflect: is this an anomalous, even if happy, coincidence? Until now we have not been used to so much activity around here. The fact is that only with a highly skilled social base and the fortunate existence of a few people with initiative can the effort of people motivated enough to spend their free time learning and growing in knowledge and experience become visible. The base is there; we just have to make it count.

Today, in contrast, I could not be more demoralized about it. I don't feel like going into the reasons. I simply felt like recording the emotional shock of the discovery.

Opensource gratitude

Some weeks ago I read somewhere on Twitter about how good it would be to adopt and share the practice of thanking the opensource developers of the tools you use and love. I remember neither who nor where, and I'm probably stealing the method they proposed. Personally, I'm getting used to visiting the project development site and, if no better method is available, opening an issue with a text like this:

I'm opening this issue just to thank you for the tool you wrote. It's nice, useful and saves a lot of my time.


PS: please don’t close the issue so other persons could vote it to exprese their gratitude too.

As an example, I've just written one for the CuteMarkEd editor:

CuteMarkEd gratitude screenshot

Hope this brings a little dose of endorphins to the people who, with their effort, are building the infrastructure of the digital society. Think about it.

15 January 2018

Photo galleries of the Almería retrocomputing museum

warehouse of our retrocomputing collection

calculator In my recovery of past content and references, I am compiling these links related to the Museo almeriense de retroinformática association, a project that three lifelong friends founded on 6 January 2004. Since then it has remained in an almost catatonic state, but at least it has kept serving the purpose of preserving computing and electronic equipment (and lately, by extension, electrical equipment, calculating machines and other devices) that we value somewhat sentimentally, but with an objective eye for significance, technological impact, industrial design and historiographic value. And yes, the first thing museography experts will criticize is the name: indeed, we now know that what we maintain is only a collection, not a museum. The museum is still an aspiration, but consolidating the collection is already a serious and sufficiently complicated goal. I hope we give it more love in the future.

oscilloscope virtual reality headset, 1990s

Exhibition at the SLCENT conference

Exhibition organized at the XI Jornadas SLCENT de Informática y Electrónica (November 2014), which I.E.S. Al-Ándalus organizes annually.

Photo gallery by Ana Mora:


Photo gallery by Paco Cantón:

Photo gallery by Paco Cantón

Appearance on Canal Sur Noticias

A very brief appearance in the Almería edition of Canal Sur Noticias, explaining the context of today's cybersecurity threats.

Thanks to the Canal Sur newsroom for their trust.

10 January 2018

Free-software-related regulations in the Junta de Andalucía

Prompted by a conversation in the HackLab Almería forum, I have refreshed my information about the political agreements on the adoption of free software in the autonomous community of Andalusia. This post is just a compilation of what was collected in that thread.

DECRETO 72/2003, de 18 de marzo, de Medidas de Impulso de la Sociedad del Conocimiento en Andalucía (Decree 72/2003, of 18 March, on Measures to Promote the Knowledge Society in Andalusia)


Article 11. Educational materials in digital format.

  1. Public schools will be provided with educational materials and programs in digital format, preferably based on free software. In any case, they will receive in that format all the educational material produced by the Administration of the Junta de Andalucía.

  2. Likewise, the production by teachers of curricular programs and materials in digital format, or for use on the Internet, will be encouraged, especially those developed using free software.


Article 31. Free software.

In acquisitions of computer equipment intended for public schools for use in educational activities, all hardware will be required to be compatible with operating systems based on free software. The computers will have preinstalled all the free software needed for the specific use they are intended for.

The computer equipment that the Administration of the Junta de Andalucía makes available in public Internet access centers will run on free software products.

The Administration of the Junta de Andalucía will promote the dissemination and use of duly guaranteed free software for personal, home and educational purposes. To that end, an advisory service will be set up on the Internet for the installation and use of this kind of product.

ORDEN de 21 de febrero de 2005, sobre disponibilidad pública de los programas informáticos de la Administración de la Junta de Andalucía y de sus Organismos Autónomos (Order of 21 February 2005 on the public availability of the software of the Administration of the Junta de Andalucía and its Autonomous Bodies)


Accordingly, pursuant to the provisions of article 44.4 of Law 6/1983, of 21 July, on the Government and Administration of the Autonomous Community,


Article 1. Purpose.

The purpose of this Order is to make publicly available the source code of the programs and computer applications, and the documentation associated with them, owned by the Administration of the Junta de Andalucía and its Autonomous Bodies, which shall have the status of free software, as well as to establish the conditions for their free use and distribution.

Article 2. Definition of free software.

1) For the purposes of this Order, free software shall mean those programs, computer applications and associated documentation that meet the following requirements:

a) They can be read and/or executed for any purpose, without restrictions.

b) They can be freely copied and modified.

c) Their copies and modified versions can be freely distributed.

2) Free software shall include both the software produced by staff in the service of the Administration of the Junta de Andalucía or its Autonomous Bodies in the exercise of their duties, and the software developed specifically for them under any kind of contract signed with third parties.

Article 3. Conditions of use.

  1. To meet the requirements established in the previous article, the software shall be available in source code form, and the documentation in an open format that allows its modification.

  2. The distribution and use of copies, modified or not, shall be authorized only if the availability conditions established in articles 2 and 3 of this Order are maintained.

28 December 2017

Frogr 1.4 released

Another year goes by and, again, I feel the call to make one more release just before 2017 is over, so here we are: frogr 1.4 is out!

Screenshot of frogr 1.4

Yes, I know what you're thinking: "Who uses Flickr in 2017 anyway?". Well, as shocking as this might seem to you, it is apparently not just me using this small app: there are another 8,935 users out there issuing an average of 0.22 queries per second (19,008 queries a day) over the past year, according to the stats provided by Flickr for the API key.

Granted, it may not be a huge number compared to what other online services might be experiencing these days, but for me this is enough motivation to keep the little green frog working and running, and thus worth updating one more time. I'd also argue that these numbers, for a niche app like this one (aimed at Linux desktop users who still use Flickr to upload pictures in 2017), do not even look too bad, although without more specific data backing this comment it is, of course, just my personal and highly biased opinion.

So, what’s new? Some small changes and fixes, along with other less visible modifications, but still relevant and necessary IMHO:

  • Fixed integration with GNOME Software (fixed a bug regarding appstream data).
  • Fixed errors loading images from certain cameras & phones, such as the OnePlus 5.
  • Cleaned the code by finally migrating to using g_auto, g_autoptr and g_autofree.
  • Migrated to the meson build system, and removed all the autotools files.
  • Big update to translations, now with more than 22 languages 90% – 100% translated.

Also, this is the first release that happens after having a fully operational centralized place for Flatpak applications (aka Flathub), so I’ve updated the manifest and I’m happy to say that frogr 1.4 is already available for i386, arm, aarch64 and x86_64. You can install it either from GNOME Software (details on how to do it at, or from the command line by just doing this:

flatpak install --from

Also worth mentioning: starting with frogr 1.4, I will no longer be updating my PPA at Launchpad. I did that in the past so Ubuntu users could get the latest release ASAP, but now we have Flatpak, which is a much better way to install and run the latest stable release on any supported distro (not just Ubuntu). Thus, I'm dropping the extra work required to deal with the PPA and flat-out recommending users to use Flatpak or wait until their distro of choice packages the latest release.

And I think this is everything. As usual, feel free to check the main website for extra information on how to get frogr and/or how to contribute to it. Feedback and/or help is more than welcome.

Happy new year everyone!

26 December 2017

Adding tags to my jekyll website

This iteration of the website uses the Jekyll static website generator. From time to time I add some features to the configuration; this time I wanted to add tags support to my posts. After a quick search I found jekyll-tagging. Getting it working was relatively easy, although if you are not into Ruby you can misconfigure the gem dependencies, as I did. To add some value to this post, I'm sharing some tips I applied that are not written in the project readme file.

First: I added a /tag/ page with the cloud of used tags, in the form of a tag/index.html file with this content:

---
layout: page
permalink: /tag/
---

<div class="tag-cloud" id="tag-cloud">
  <a href="/tag/%40firma/" class="set-1">@firma</a> <a href="/tag/akademy/" class="set-1">Akademy</a> <a href="/tag/alepo/" class="set-1">Alepo</a> <a href="/tag/almeria/" class="set-2">Almería</a> <a href="/tag/andalucia/" class="set-1">Andalucía</a> <a href="/tag/android/" class="set-1">Android</a> <a href="/tag/barcelona/" class="set-1">Barcelona</a> <a href="/tag/bolivia/" class="set-1">Bolivia</a> <a href="/tag/cacert/" class="set-1">CAcert</a> <a href="/tag/canarias/" class="set-1">Canarias</a> <a href="/tag/centos/" class="set-1">CentOS</a> <a href="/tag/ceres/" class="set-1">Ceres</a> <a href="/tag/chronojump/" class="set-1">ChronoJump</a> <a href="/tag/cuba/" class="set-1">Cuba</a> <a href="/tag/cubaconf/" class="set-1">CubaConf</a> <a href="/tag/epf/" class="set-1">EPF</a> <a href="/tag/fnmt/" class="set-1">FNMT</a> <a href="/tag/fosdem/" class="set-1">FOSDEM</a> <a href="/tag/fudcon/" class="set-1">FUDCon</a> <a href="/tag/factura-e/" class="set-1">Factura-e</a> <a href="/tag/fedora/" class="set-3">Fedora</a> <a href="/tag/flock/" class="set-1">Flock</a> <a href="/tag/fuerteventura/" class="set-1">Fuerteventura</a> <a href="/tag/gdg/" class="set-1">GDG</a> <a href="/tag/gnome/" class="set-1">GNOME</a> <a href="/tag/gnome-hispano/" class="set-1">GNOME-Hispano</a> <a href="/tag/guadec/" class="set-1">GUADEC</a> <a href="/tag/galicia/" class="set-1">Galicia</a> <a href="/tag/geocamp/" class="set-1">GeoCamp</a> <a href="/tag/google/" class="set-1">Google</a> <a href="/tag/guademy/" class="set-1">Guademy</a> <a href="/tag/hacklab_almeria/" class="set-1">HackLab_Almería</a> <a href="/tag/hispalinux/" class="set-1">Hispalinux</a> <a href="/tag/ia/" class="set-1">IA</a> <a href="/tag/ibm/" class="set-1">IBM</a> <a href="/tag/kde/" class="set-1">KDE</a> <a href="/tag/kompozer/" class="set-1">Kompozer</a> <a href="/tag/l10n/" class="set-1">L10N</a> <a href="/tag/la_coruna/" class="set-1">La_Coruña</a> <a href="/tag/la_paz/" class="set-1">La_Paz</a> <a 
href="/tag/la_rioja/" class="set-1">La_Rioja</a> <a href="/tag/linuxtag/" class="set-1">LinuxTag</a> <a href="/tag/lucas/" class="set-1">LuCAS</a> <a href="/tag/lugo/" class="set-1">Lugo</a> <a href="/tag/mdd/" class="set-1">MDD</a> <a href="/tag/madrid/" class="set-1">Madrid</a> <a href="/tag/microsoft/" class="set-1">Microsoft</a> <a href="/tag/mono/" class="set-1">Mono</a> <a href="/tag/mexico/" class="set-1">México</a> <a href="/tag/nueva_york/" class="set-1">Nueva_York</a> <a href="/tag/ocsp/" class="set-1">OCSP</a> <a href="/tag/odf/" class="set-1">ODF</a> <a href="/tag/osl_unia/" class="set-1">OSL_UNIA</a> <a href="/tag/" class="set-1"></a> <a href="/tag/oswc/" class="set-1">OSWC</a> <a href="/tag/omegat/" class="set-1">OmegaT</a> <a href="/tag/openid/" class="set-1">OpenID</a> <a href="/tag/openmind/" class="set-1">Openmind</a> <a href="/tag/pycones/" class="set-1">PyConES</a> <a href="/tag/renfe/" class="set-1">Renfe</a> <a href="/tag/scfloss/" class="set-1">SCFLOSS</a> <a href="/tag/soos/" class="set-2">SOOS</a> <a href="/tag/ssl/" class="set-1">SSL</a> <a href="/tag/sonic_pi/" class="set-1">Sonic_Pi</a> <a href="/tag/supersec/" class="set-1">SuperSEC</a> <a href="/tag/superlopez/" class="set-1">Superlópez</a> <a href="/tag/tldp-es/" class="set-1">TLDP-ES</a> <a href="/tag/ue/" class="set-1">UE</a> <a href="/tag/vpn/" class="set-1">VPN</a> <a href="/tag/valencia/" class="set-1">Valencia</a> <a href="/tag/x509/" class="set-1">X509</a> <a href="/tag/yorokobu/" class="set-1">Yorokobu</a> <a href="/tag/zaragoza/" class="set-1">Zaragoza</a> <a href="/tag/admninistracion_publica/" class="set-1">admninistración_pública</a> <a href="/tag/anotaciones/" class="set-1">anotaciones</a> <a href="/tag/calidad/" class="set-1">calidad</a> <a href="/tag/ciencia_abierta/" class="set-1">ciencia_abierta</a> <a href="/tag/conferencia/" class="set-3">conferencia</a> <a href="/tag/congreso/" class="set-2">congreso</a> <a href="/tag/correo-e/" class="set-1">correo-e</a> <a 
href="/tag/cultura/" class="set-1">cultura</a> <a href="/tag/docker/" class="set-1">docker</a> <a href="/tag/ensayo/" class="set-1">ensayo</a> <a href="/tag/entrevista/" class="set-1">entrevista</a> <a href="/tag/filosofia/" class="set-1">filosofía</a> <a href="/tag/flatpak/" class="set-1">flatpak</a> <a href="/tag/fpga_wars/" class="set-1">fpga_wars</a> <a href="/tag/git/" class="set-1">git</a> <a href="/tag/gvsig/" class="set-1">gvSIG</a> <a href="/tag/hardware/" class="set-1">hardware</a> <a href="/tag/historia/" class="set-1">historia</a> <a href="/tag/innovacion/" class="set-1">innovación</a> <a href="/tag/interoperabilidad/" class="set-1">interoperabilidad</a> <a href="/tag/jekyll/" class="set-1">jekyll</a> <a href="/tag/laptop/" class="set-1">laptop</a> <a href="/tag/legislacion/" class="set-1">legislación</a> <a href="/tag/lingueisticos/" class="set-1">lingüísticos</a> <a href="/tag/linux/" class="set-1">linux</a> <a href="/tag/micro-educacion/" class="set-1">micro-educación</a> <a href="/tag/migas/" class="set-1">migas</a> <a href="/tag/museo/" class="set-1">museo</a> <a href="/tag/node.js/" class="set-1">node.js</a> <a href="/tag/normativa/" class="set-1">normativa</a> <a href="/tag/opensource/" class="set-5">opensource</a> <a href="/tag/p2p/" class="set-1">p2p</a> <a href="/tag/politica/" class="set-1">política</a> <a href="/tag/prensa/" class="set-1">prensa</a> <a href="/tag/procomunes/" class="set-1">procomunes</a> <a href="/tag/propiedad_intelectual/" class="set-1">propiedad_intelectual</a> <a href="/tag/publicacion/" class="set-2">publicación</a> <a href="/tag/recursos/" class="set-1">recursos</a> <a href="/tag/retroinformatica/" class="set-1">retroinformática</a> <a href="/tag/revolucion_digital/" class="set-1">revolución_digital</a> <a href="/tag/seguridad/" class="set-1">seguridad</a> <a href="/tag/servicios/" class="set-1">servicios</a> <a href="/tag/software/" class="set-3">software</a> <a href="/tag/sofware/" class="set-1">sofware</a> <a 
href="/tag/sostenibilidad/" class="set-1">sostenibilidad</a> <a href="/tag/video/" class="set-1">vídeo</a> <a href="/tag/web/" class="set-1">web</a> <a href="/tag/web-semantica/" class="set-1">web-semántica</a> <a href="/tag/etica/" class="set-1">ética</a>

Compared to the jekyll-tagging examples, I only use the tag cloud on that /tag/ page and not on the individual tag pages, because it is a bit annoying when there are too many tag words.

And second, probably more interesting: showing the post tags in the HTML page:

<p class="post-meta">  tags:  <a href="/tag/jekyll/" rel="tag">jekyll</a> </p>

This is relevant because the jekyll-tagging readme example uses {{ post | tags }}, but for it to work inside the post page you should use {{ page | tags }}.

Yeah, this is not a great post, but maybe it can save you some time if you're adding jekyll-tagging to your website.

7 December 2017

OSK update

There’s been a rumor that I was working on improving the gnome-shell on-screen keyboard. What's been going on there? Let me show you!

The design is based on the mockups at, here's how it looks in English (mind you, it hasn't gone through the theming wizards yet):

The keymaps are generated from CLDR (see here), which helped boost the number of supported scripts (cf. caribou). Some visual examples:

As you can see, there are still a few ugly ones; the layouts aren't as uniform as one might expect. These issues will be resolved over time.

The additional supported scripts don't mean much without a way to send those fancy chars/strings to the client. Traditionally we were only able to send forged keyboard events, which meant we were restricted to keycodes that had a representation in the current keymap. On X11 we are kind of stuck with that, but we can do better on Wayland. This work relies on a simplified version of the text input protocol, which I'm giving a last proofreading before proposing as v3 (the branches currently use a private copy). Using a specific protocol allows sending UTF-8 strings independently of the keymap, which is also very convenient for text completion.

But there are keymaps where CLDR doesn't dare to go; prominent examples are Chinese and Japanese. For those, I'm looking into properly leveraging IBus, so that pinyin-like input methods work by feeding the results into the suggestions box:

Ni Hao!

The suggestion box even kind of works with the typing-booster IBus IM. But you have to activate it explicitly; there is room for improvement here in the future.

And there are of course still bad bits and TODO items. Some languages, like Korean, have neither a layout nor input methods that accept latin input, so they are badly handled (read: not at all). It would also be nice to support shape-based input.

Other missing things from the mockups are the special numeric and emoji keymaps, there’s some unpushed work towards supporting those, but I had to draw the line somewhere!

The work has been pushed in mutter, gtk+ and gnome-shell branches, which I hope will get timely polished and merged this cycle 🙂

28 November 2017

So we are working in new conferences for 2018

Well, now we can say that here (Almería, Spain) we know something about how to run technical conferences and meetings, especially opensource/freesoftware ones. In 2016 and 2017 we co-organized:

And we are really grateful to the people and communities who trusted us to be their host for a few days. The experience we gathered pushed us toward new challenges, so here I just want to share the conferences I'm currently involved in for 2018:

  • SuperSEC 2018, a national (Spain) conference on secure software development, in the orbit of OWASP, to be held next May. And we are almost ready to open the CFP!

  • GUADEC 2018, the European conference for the GNOME et al. community, from the 6th to the 11th of July.

But we want mooar, so we are currently bidding to host Flock 2018, the annual meeting of the Fedora Project community.

Our goal is to host both sister conferences one right after the other, so a lot of people could attend both while saving good money.

So, if you are a contributor to any of those communities, or just an opensource enthusiast, consider this extraordinary opportunity to match your summer holidays with a nice tourist destination and the July 2018 opensource world meeting point!

Wish us luck :-)

PD: Just updated with the definitive dates for GUADEC 2018.

16 de November de 2017

Headless VirtualBox on a Scaleway server

This week I needed to install a VirtualBox virtual machine on a Scaleway server (2.9 €/month) with no graphical environment. The first question was… can it even be done? 🙂 The first thing we need is to install VirtualBox. The catch is that Scaleway doesn’t run a vanilla kernel; it is quite customized to fit this cloud-hosting company. So you have to download the kernel source, match its configuration to the one Scaleway uses, compile the required modules and only then download and install VirtualBox. Thank $DEITY, somebody has already done the hard work of preparing a script for exactly this. The last part of that script installs a Windows virtual machine in VirtualBox (I didn’t need that, so I commented those lines out).

Fine, now we have the kernel ready and VirtualBox installed. But how the hell do I install a virtual machine from an .iso image with no graphical environment? Well, by simulating the graphical environment through phpVirtualBox (yes, such a contraption exists and it works beautifully). Moreover, this tutorial explains how to install it step by step.

Finally, if you want to reach the console that VirtualBox opens by default when you launch a Linux virtual machine, you can do it via RDP (Remote Desktop Protocol). I didn’t know either that a VBoxHeadless instance opens port 9000 for RDP access. From your remote host you can view the VirtualBox console via rdesktop (on Linux) or via Microsoft Remote Desktop on OSX.

It is also possible to use the command line to launch the VirtualBox machine from a shell on Scaleway (with no need for phpVirtualBox, although it is far less intuitive).
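As a sketch of what that command-line route could look like (the VM name “win7”, the ISO path and the server name below are placeholders, not taken from the original setup; port 9000 matches the RDP port mentioned above):

```shell
# Guard: skip gracefully on machines without VirtualBox installed.
command -v VBoxManage >/dev/null || exit 0

# Create and register the VM ("win7" is a placeholder name).
VBoxManage createvm --name win7 --ostype Windows7 --register
VBoxManage modifyvm win7 --memory 2048 --vrde on --vrdeport 9000

# Attach the installation ISO (placeholder path).
VBoxManage storagectl win7 --name IDE --add ide
VBoxManage storageattach win7 --storagectl IDE --port 0 --device 0 \
  --type dvddrive --medium ~/isos/win7.iso

# Launch without a GUI; the console is then reachable over RDP.
VBoxHeadless --startvm win7 &
```

From your own machine, `rdesktop my-scaleway-server:9000` (on Linux) or Microsoft Remote Desktop (on OSX) would then show the console.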

09 de November de 2017

Conferencia en CubaConf 2017 (3)

Third and final day of CubaConf in Havana. The great atmosphere continues and everyone is having a blast:

And to close the party, an extraordinary concert of classic rock covers, with enormous power, blazing guitars and angelic voices, courtesy of the Havana rock band Gens. A wonderful experience:


And I had the honour of bringing over and handing out two Icezum Alhambra boards. While this is basically a symbolic act, the purpose is to spread the adoption of a technology as affordable as it is fascinating and versatile.

To the Unión de Informáticos de Cuba, represented by Ailyn Febles:

And here with my friend PB, handing him a board to be put at the disposal of the Cuban opensource community:

May it serve as a seedbed for new hackers and great #FPGAWars projects.


In the previous post I mentioned Flatpak’s potential for offline application distribution (for instance through the El Paquete system) and the need to be able to package runtimes as standalone files suitable for sharing. I’m told the company Endless is working on something along those lines. A thorough look at the subject is now imperative.

08 de November de 2017

Talk at CubaConf 2017 (2)

Still at CubaConf. Staying connected from Cuba is very, very hard. Prices are expensive for foreigners and outrageous for locals, and the connection spots are few and far between. That is why we tweet so little.

Yesterday, the 8th, I had the chance to give two lightning talks I was really excited about:

  • a presentation of the Icezum Alhambra board, with a very brief explanation of what FPGA technology is useful for and how accessible it has become thanks to the low price of boards based on the Lattice ICE40 chip and, in particular, to the availability of Yosys, the 100% opensource Verilog SDK we finally have at hand thanks to Clifford Wolf’s extraordinary reverse-engineering work #FPGAWars;

  • another very brief introduction to and pitch for Flatpak technology, the Flathub application store, its much-needed potential and accessibility and, perhaps, its fit for the Cuban need to distribute software and content offline, forced by the aforementioned enormous restrictions on Internet access.

Regarding this potential for massive use of Flatpak on Linux, it would be necessary to verify whether packages of the runtimes can be generated the same way they can for applications. It may be an available feature, but I don’t know yet and I haven’t been able to check it on the Internet. If it is possible, it could be an extraordinary opportunity to spread Flatpak, increasing the range of available applications and favouring the adoption of opensource solutions in a country facing as many difficulties as Cuba does. We shall see.

06 de November de 2017

Talk at CubaConf 2017

Today I had the honour of opening the inaugural plenary of CubaConf 2017, in Havana. And, well, I think it went rather well; the audience will be the judge. As for the topic, it was the same one I have been talking about in recent months: a look back at these years of experience at HackLab Almería. As I explained in the talk, it’s not that we have really invented anything, but at least we can share details and some lessons learned through practice. This time I titled the talk «HackLab Almería, un modelo de dinamización tecnológica hiperlocal»; I think I have finally found the definitive title.

As usual, the slides are available at

screenshot of the slides

In the future I hope to be able to talk about new topics, but I admit that these days this is the one I have best prepared, until I dig into new questions. In any case I hope it is useful.

More related material on this site:

I think there are still details I have glossed over that deserve explaining. I should find the time to dig in once and for all, document it, and move on to something else. We shall see.

02 de November de 2017

Euskalbar for Chrome


Euskalbar is now available for Chrome. Press Ctrl-Shift-U (one click, as before) and directly type the word you want to look up. Enjoy! 🙂

The long version

Euskalbar had to be rewritten from scratch (Igor Leturia, of Elhuyar, did that excellent work in a very short time). Why? Plugins for Firefox used to be programmed one way (using XUL and its own API) and for Chrome another way (using the WebExtensions API). The result: when developing a plugin you had to do double the work if you wanted it to run on both browsers. In fact, the MS Edge browser (formerly Internet Explorer) now uses the WebExtensions API too. To make a long story short: the Euskalbar code had to be thrown in the bin and restarted from scratch, from zero. And with a deadline: on November 1st, plugins that do not use the WebExtensions API were disabled.

That being so, since I use Chrome a lot for my work, I adapted the new Euskalbar code so it can also be used in Chrome. The changes were small. Then I uploaded it to the Web Store. Along the way I made a few small contributions to the main codebase… Among other things, for the sake of usability, Euskalbar has to work with a single click: that is, pressing Ctrl+Shift+U (or Cmd+Shift+U on MacOSX) should automatically select the text search field… We haven’t managed that on Firefox yet, but we have on Chrome.

Hopefully this will calm some users’ anxiety 🙂


Why has Euskalbar’s layout changed? We used to have a toolbar available at all times…

Right, the WebExtensions API doesn’t allow that yet. Will it ever? Maybe yes, maybe no… But if the API offers that option some day, you can be sure it will make its way into Euskalbar.

Are the Chrome and Firefox versions the same?

Yes, at their core they use the same source code. Of course, you will also find a few small differences, but they will be details, never new features.

26 de October de 2017

An improvised list of opensource conference management software

From time to time I check the available options for opensource-licensed conference management software. My requirements would basically be:

  • management of the call for papers process:
    • author registration
    • submission of proposals and documents
    • review and approval process
  • bonus extra:
    • managing and publishing the conference program
    • exporting the program in some standard recognized formats (BARF, calendar, etc)
    • some mobile application able to show and manage the program calendar
  • heaven’s gift:
    • event management features for conference attendance: registration, mailing, etc. (think of it as an equivalent of Eventbrite).

This is as far as I got:

  • Pentabarf, which seems obsolete these days
  • FRAB, a Pentabarf successor, used at FrOSCon and CCC.
  • COD, based on Drupal and used at DrupalCon’s, I guess.
  • CFP-Devoxx used at several Devoxx conferences and others.
  • OSEM, used at OpenSUSE, Owncloud, PGConf US and others.
  • Canonical Summit; this link points to the DebConf fork used in 2014 and 2015.
  • Wafer, Django based, currently used by DebConf.
  • OCS, made in PHP; in Gunnar’s words: «a long-running system, aimed at academic conferences, with a large install base».
  • symposion (plus registrasion), used at North Bay Python.
  • OpenConferenceWare, used at Open Source Bridge.

Other «raw» information:

If you know other applications you think should be added, please give feedback through the comments. Same if you are aware of a better list than this, or if you have a better list of application requirements.

PS: This post is just a draft, so it may be subject to future additions and corrections.

PD: Added the links provided by Gunnar.
PPD: Added links to symposion and OpenConferenceWare.

13 de October de 2017

Installing yaml-dev support in PHP on OSX

Quick recipe to install YAML support in PHP 5.6 on OSX Sierra.

Add the php tap to brew:

$ brew tap homebrew/homebrew-php

Install the dynamic library:

$ brew install php56-yaml

Check the installation path:

$ ls -l /usr/local/opt/php56-yaml
/usr/local/opt/php56-yaml -> ../Cellar/php56-yaml/1.3.0

Create php.ini so it can be edited to enable loading of the extension:

$ sudo cp /etc/php.ini.default /etc/php.ini

Enable the extension in php.ini:

$ sudo vi /etc/php.ini
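The line to add is just an extension declaration; the .so path below is an assumption based on the /usr/local/opt/php56-yaml symlink checked above, so adjust it if your layout differs:

```ini
; /etc/php.ini — load the YAML extension
; (path assumed from the /usr/local/opt/php56-yaml symlink above)
extension="/usr/local/opt/php56-yaml/yaml.so"
```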

Check that it has loaded (restart Apache if you want YAML support in the web server; in my case it wasn’t necessary, since the support was needed for the command line):

$ php -i | grep -i yaml
LibYAML Support => enabled
LibYAML Version => 0.1.7

26 de September de 2017

Introductory git demo videos

Some modest videos, recorded a while ago but never published, introducing the use of Git. The demo was prepared for a class introducing Git and distributed version control systems in the «Integración de sistemas software» course at the University of Almería in 2015. The idea is to work with recipes so that a group of students experiences cloning, modifying and proposing patches to third-party repositories, and accepting patches into their own repo. They are nothing fancy, but they might be useful to somebody.

Creating the personal recipes repository:

I create a repo on GitHub:

I link a local repo to the one we created on GitHub:

I verify that the local changes have been added to the repo on GitHub:

I create my own repository by replicating that of a third person:

I «clone» my new remote repository, creating a local copy:

I modify a third person’s repository:

Browsing the change history:

A pull-request example:

Using branches:
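The steps above can be sketched as a single shell session; here a local bare repository stands in for GitHub, and every name and recipe below is made up for the example:

```shell
set -e
workdir=$(mktemp -d) && cd "$workdir"

# A bare repository playing the role of the GitHub remote.
git init --bare recetas-remote.git
git -C recetas-remote.git symbolic-ref HEAD refs/heads/main

# Create the personal recipes repository and link it to the remote.
git init recetas && cd recetas
git symbolic-ref HEAD refs/heads/main
git config user.email "demo@example.com" && git config user.name "Demo"
echo "Gazpacho: tomate, pepino, ajo..." > gazpacho.txt
git add gazpacho.txt && git commit -m "Primera receta"
git remote add origin ../recetas-remote.git
git push -u origin main              # verify the changes reach the remote

# "Clone" the repository as a third person would and propose a change.
cd "$workdir" && git clone recetas-remote.git recetas-copia
cd recetas-copia
git config user.email "amiga@example.com" && git config user.name "Amiga"
git checkout -b mejora-gazpacho      # work on a branch
echo "...y un chorro de aceite de oliva" >> gazpacho.txt
git commit -am "Mejora del gazpacho"
git push origin mejora-gazpacho      # basis for a pull request

git log --oneline --all              # browse the change history
```

The pull-request step itself happens in the GitHub web UI; the pushed branch is what the request is opened from.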

18 de September de 2017

HackIt! 2017 Level 5. Dusting off the old MacOS

Wait, what happened to level 4? W0pr described it to perfection using Radare. Truth is, their posts are a delight to read. As far as DiarioLinux is concerned, we solved it using IDA Pro (the shoemaker’s son always goes barefoot). Well, actually Kotxerra solved it while the rest of us slept peacefully at 4 AM. Blame him for being so efficient 😉 In fact, he gave us a mini-workshop on the solution. I have it documented in one of the backups I made, so as soon as I find a moment I’ll dust it off to present another possible solution.
What interests me now is showing how we approached level 5. Few people solved it… if I remember correctly (and the screenshot in this post confirms it), only W0pr (hats off again). From what I discussed with them, they did static analysis of the disassembled code. We followed another route, surely more tortuous (and obviously with worse results) but interesting as an exercise. But let’s go step by step. What is level 5, Infinite Loop, about? We are given an infiniteloop.sit file to crack. .sit files are compressed (StuffIt) archives that held executables for the old MacOS 9. Watch out, because the first thing that occurred to us was to find a decompressor for Linux, run a MacOS 9 emulator on the executable and debug. We weren’t on a bad track… except that .sit files store resource data (known as “resource forks”) and executable metadata, and without them the executable is really useless. Note the “joke” in the level’s title (A fork in the road…).

Long story short, we found a way to install a MacOS 9 emulator (version 9.2; neither 9.0 nor 9.1 would do, from experience) on qemu… on MacOSX Sierra :), after many detours, using SheepShaver. When I say many detours I’m talking about some 5-6 hours, with very experienced people. It failed everywhere: the installation, the file system, freezes in the middle of the installation, and the kitchen sink.

But we finally managed to install it, upload the .sit.hqx and run it. Magic!

When we told @marcan42 about it we saw a gleam in his eyes: “Finally, somebody who actually ran it!”. And no wonder, really. However, things were not as easy as we expected…

The first thing that catches your attention is that, after typing the unlock code, we assumed we would be able to cancel and try again with another code (essential for debugging). But no: the FAIL window stayed fixed on screen and there was NO way to get rid of it, forcing us to reset the whole machine (OMFG!).
The next thing we tried was installing a debugger on MacOS 9.2. As can be seen in the first screen, where the MacOS logo appears with the text “Debugger installed” below it, we have MacsBug installed, a rather archaic debugger for the Motorola 68000. The catch is that to activate the debugger you need to press the “Command-Power” key combination (the Programmer’s key), which, in qemu and with a modern MBP keyboard, I never managed to do ¯\_(ツ)_/¯

09 de September de 2017

WebDriver support in WebKitGTK+ 2.18

WebDriver is an automation API to control a web browser. It allows creating automated tests for web applications independently of the browser and platform. WebKitGTK+ 2.18, which will be released next week, includes an initial implementation of the WebDriver specification.

WebDriver in WebKitGTK+

There’s a new process (WebKitWebDriver) that works as the server, processing the clients’ requests to spawn and control the web browser. The WebKitGTK+ driver is not tied to any specific browser: it can be used with any WebKitGTK+ based browser, but it uses MiniBrowser as the default. The driver uses the same remote-control protocol used by the remote inspector to communicate with and control the web browser instance. The implementation is not complete yet, but it’s enough for what many users need.

The clients

The web application tests are the clients of the WebDriver server. The Selenium project provides APIs for different languages (Java, Python, Ruby, etc.) to write the tests. Python is the only language supported by WebKitGTK+ for now. It’s not upstream yet, but we hope it will be integrated soon. In the meantime you can use our fork on GitHub. Let’s see an example to understand how it works and what we can do.

from selenium import webdriver

# Create a WebKitGTK driver instance. It spawns WebKitWebDriver 
# process automatically that will launch MiniBrowser.
wkgtk = webdriver.WebKitGTK()

# Let's load the WebKitGTK+ website.
wkgtk.get("https://www.webkitgtk.org")

# Find the GNOME link.
gnome = wkgtk.find_element_by_partial_link_text("GNOME")

# Click on the link.
gnome.click()

# Find the search form. 
search = wkgtk.find_element_by_id("searchform")

# Find the first input element in the search form.
text_field = search.find_element_by_tag_name("input")

# Type epiphany in the search field and submit.
text_field.send_keys("epiphany")
text_field.submit()

# Let's count the links in the contents div to check we got results.
contents = wkgtk.find_element_by_class_name("content")
links = contents.find_elements_by_tag_name("a")
assert len(links) > 0

# Quit the driver. The session is closed so MiniBrowser
# will be closed and then the WebKitWebDriver process finishes.
wkgtk.quit()

Note that this is just an example to show how to write a test and the kind of things you can do. There are better ways to achieve the same results, and it depends on the current content of public websites, so it might not work in the future.

Web browsers / applications

As I said before, the WebKitWebDriver process supports any WebKitGTK+ based browser, but that doesn’t mean all browsers can automatically be controlled by automation (that would be scary). WebKitGTK+ 2.18 also provides new API for applications to support automation.

  • First of all, the application has to explicitly enable automation using webkit_web_context_set_automation_allowed(). It’s important to know that the WebKitGTK+ API doesn’t allow enabling automation in several WebKitWebContexts at the same time. The driver will spawn the application when a new session is requested, so the application should enable automation at startup. It’s recommended that applications add a new command line option to enable automation, and only enable it when provided.
  • After launching the application the driver will request the browser to create a new automation session. The signal “automation-started” will be emitted in the context to notify the application that a new session has been created. If automation is not allowed in the context, the session won’t be created and the signal won’t be emitted either.
  • A WebKitAutomationSession object is passed as parameter to the “automation-started” signal. This can be used to provide information about the application (name and version) to the driver, which will match them against what the client requires, accepting or rejecting the session request.
  • The WebKitAutomationSession will emit the signal “create-web-view” every time the driver needs to create a new web view. The application can then create a new window or tab containing the new web view that should be returned by the signal. This signal will always be emitted even if the browser already has an initial web view open; in that case it’s recommended to return the existing empty web view.
  • Web views are also automation aware: similar to ephemeral web views, web views that allow automation should be created with the constructor property “is-controlled-by-automation” enabled.

This is the new API that applications need to implement to support WebDriver. It’s designed to be as safe as possible, but there are many things that can’t be controlled by WebKitGTK+, so we have several recommendations for applications that want to support automation:

  • Add a way to enable automation in your application at startup, like a command line option, that is disabled by default. Never allow automation in a normal application instance.
  • Enabling automation is not the only thing the application should do, so add an automation mode to your application.
  • Add visual feedback when in automation mode, like changing the theme, the window title or whatever that makes clear that a window or instance of the application is controllable by automation.
  • Add a message to explain that the window is being controlled by automation and the user is not expected to use it.
  • Use ephemeral web views in automation mode.
  • Use a temporary user profile in automation mode; do not allow automation to change the history, bookmarks, etc. of an existing user.
  • Do not load any homepage in automation mode, just keep an empty web view (about:blank) that can be used when a new web view is requested by automation.

The WebKitGTK client driver

Applications need to implement the new automation API to support WebDriver, but the WebKitWebDriver process doesn’t know how to launch the browsers. That information should be provided by the client using the WebKitGTKOptions object. The driver constructor can receive an instance of a WebKitGTKOptions object, with the browser information and other options. Let’s see how it works with an example that launches Epiphany:

from selenium import webdriver
from selenium.webdriver import WebKitGTKOptions

options = WebKitGTKOptions()
options.browser_executable_path = "/usr/bin/epiphany"
epiphany = webdriver.WebKitGTK(browser_options=options)

Again, this is just an example; Epiphany doesn’t even support WebDriver yet. Browsers or applications could create their own drivers on top of the WebKitGTK+ one to make it more convenient to use.

from selenium import webdriver
epiphany = webdriver.Epiphany()


During the next release cycle, we plan to do the following tasks:

  • Complete the implementation: add support for all commands in the spec and complete the ones that are partially supported now.
  • Add support for running the WPT WebDriver tests in the WebKit bots.
  • Add a WebKitGTK driver implementation for other languages in Selenium.
  • Add support for automation in Epiphany.
  • Add WebDriver support to WPE/dyz.

23 de August de 2017

Flatpak workshops in Almería

At HackLab Almería we have started a series of hacking sessions to learn, as a group, how to get along with Flatpak:

Personally, my plan is to get the OmegaT package out the door and finally retire the one I have abandoned in Fedora.

The group hopes to publish quite a few new ones, for example Banshee. And personally, if I get the hang of packaging Java applications, I think I’ll find the time to package Freeplane and thus make the definitive jump from the old Freemind.

We’ll see how it goes.

22 de August de 2017

Tracker requires SQLite >= 3.20 to be compiled with --enable-fts5

Tracker is one of those pieces of software that get no special praise when things work, but you wake up to personal insults on Bugzilla when they don’t. Today is one of those days.

Several distros have been eager to push SQLite 3.20.0, still hot from the oven, to their users, apparently ignoring the API and ABI incompatibilities described in the changelog. These do hit Tracker, and they only become visible at runtime.

Furthermore, there is additional undocumented ABI breakage that makes FTS5 modules generated from pre/post-3.20.0 SQLite code backward and forward incompatible with the other versions. Tracker used to ship a copy of the FTS5 module, but this situation is no longer tenable.

The solution, then? Making it mandatory that SQLite >= 3.20.0 has FTS5 built in. The just-released Tracker 1.12.3 and 1.99.3 will error hard if that is not the case.
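A quick way to check whether the installed SQLite has FTS5 built in is simply to try to create an FTS5 table (a sketch; nothing Tracker-specific here):

```shell
# Try to create an FTS5 virtual table in an in-memory database.
# SQLite built without --enable-fts5 fails with "no such module: fts5".
if sqlite3 ':memory:' 'CREATE VIRTUAL TABLE t USING fts5(content);'; then
    echo "FTS5: builtin"
else
    echo "FTS5: missing (SQLite built without --enable-fts5)"
fi
```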

I’ve just sent this mail to distributor-list:

Hi all,

Sqlite 3.20.0 broke API/ABI for Tracker purposes. The change described in point 3 at is not only backwards incompatible, but also brings backwards and forwards incompatibilities with standalone FTS5 modules like Tracker ships [1], all of those are only visible at runtime [2].

FTS5 modules generated from SQLite < 3.20.0 won't work with >= 3.20.0, and the other way around. Since it's not tenable to ship multiple FTS5 module copies for pre/post 3.20.0, Tracker shall now make it a hard requirement that SQLite is compiled with builtin FTS5 (--enable-fts5) if SQLite >= 3.20.0 is found. The current Tracker FTS5 module is kept for older SQLite versions.

This change applies to Tracker >=1.12.3 and >=1.99.3. I don't know if any distro pushed SQLite 3.20.0 in combination with a Tracker that is older than that, but it will be just as broken, those would require additional patches from the tracker-1.12 branch besides the described change.

Please handle this promptly, wherever there's sqlite 3.20.0 without builtin FTS5 and/or tracker <= 1.12.2/1.99.2, there's unhappy Tracker users.


[1] Generated as described in

So, if you see errors mentioning “TrackerTokenizer”, please contact your distro packagers. I’ll close further incoming bugs as NOTGNOME.

16 de August de 2017

Going to retire Fedora's OmegaT package

OmegaT logo

Well, the time has come and I must face my responsibility on this.

My first important package in Fedora was for OmegaT. AFAIK OmegaT is the best FLOSS computer-aided translation tool available. Over time OmegaT has been enjoying very active development, with a significant (to me) handicap: new releases add new features with new dependencies on Java libraries not available in Fedora. As you perfectly know, updating the package requires adding each one of those libraries as a new package. But I can’t find the time for such an effort. That’s the reason the last Fedora version is 2.6.3 while the latest upstream versions are 3.6.0 / 4.1.2.

So, I give up. I want to retire the package from Fedora because I’m sure I will not be able to update it anymore.

I’ll wait a few days for someone to express interest in taking ownership. Otherwise I’ll start the retirement process.

PS: OTOH I plan to publish OmegaT as a flatpak package via Flathub. It seems to me it would be a lot easier to maintain that way. I’m aware Flathub is out of the scope of Fedora :-/

PPS: I sent an announcement to the Fedora devel mailing list.

PPPS: detailed announcement and plan.

Remembering Planeta Chitón

Planeta Chitón, your friends have not forgotten you.

09 de August de 2017

At GUADEC 2017 in Manchester


Not only did I participate in the Akademy 2017 conference, but I also got to GUADEC 2017, all in the same week! And, god, I really loved it too, because it’s the first GUADEC I have attended since The Hague in 2010 and I loved meeting again with old friends from the Hispanic community. Important to say, I missed a lot of you, guys. Hope we’ll fix this next year ;-)

I should acknowledge the travel sponsorship by the GNOME Foundation and the GNOME Hispano association. Without them I couldn’t attend this year.

And the main reason to travel to Manchester was the announcement of the host city for GUADEC 2018: Almería. We competed against the city of Lyon, led by the hacker extraordinaire Bastien Nocera, but finally the GNOME board honoured us with the privilege. We put a lot of work into the Almería candidacy, but it wasn’t easy to compete with the Lyon bid. Thanks to Bastien and his team for their commitment too.

At Manchester’s GUADEC

The organization team did a great labour of love with everything, as far as I’m aware. The venue and the dormitory were really very good, and the spaces for workshops ideal too. The number of attendees this year beat the record of recent editions, and the quality of the speakers was at the best level you could expect.

Hopper Room Brooks Building

To point out some defect, let’s just say the dormitory mattresses were not as good as the rest of the facilities ;-)

About the conference contents, trying to single something out, for me the highlights were:

GNOME 20th birthday

Yeah, 20 years since GNOME was founded:

(and by the way, seems this has been my most popular tweet ever!)

And Jonathan Blandford delighted us with a retrospective of all these years:

And we had not only a 20-years party but also the celebration of founder Federico Mena’s birthday, so he did the honours cutting the celebration cake:

And yes, Miguel, we missed you too :-)


To me this has been the Flatpak conference. Most of my Linux experience has been related to software packaging and distro integration, but I hadn’t had the time to dig into Flatpak use, and in Manchester I had the opportunity to dedicate time to learning about the future of opensource packaging:

And at the workshops session Flatpak leader Alex Larsson, among others, guided us in our first steps with this technology. To me it was the continuation of Aleix Pol’s Flatpak masterclass at Akademy 2017.

About my interest in Flatpak: my main goal is migrating the packaging I “keep” in Fedora of the great OmegaT translation memory tool (see OmegaT) to flatpak and, probably, publishing it in the Flathub application store. The main reason is that I can’t find the time to fulfil all the Fedora package requirements with all the dependencies OmegaT needs. To me it’s a personal experiment I hope to finish before the end of the summer, both to get familiar with Flatpak and to revive my modest commitment to the OmegaT project.


Yeah: I brought dozens and dozens of GNOME stickers to Almería:

I’ll be sharing all of them in the coming HackLab Almería meetings.


And my personal most emotional moment was the announcement of the city hosting the next GUADEC, in 2018:

We are very grateful to the GNOME board and community for their trust. We are confident we’ll provide a nice Almería experience :-)


It has been a great conference. For many of the attendees it has been their best GUADEC in many years. And I have no doubts about that :-)

GUADEC 2017 group photo

If you want to check pictures:

So GUADEC, «we must say Adios! until we see Almeria once again»:


PD: just added the reference to the GUADEC 2018 announcement.

07 de August de 2017

Yeah, we host Akademy 2017 in Almería

The last weeks have been really busy, with full immersion in opensource activities. Here in Almería we had the honour of hosting Akademy 2017, the annual KDE project conference and meeting. I’m not really involved in KDE development, but it has been an extraordinary pleasure being their hosts. Very kind people, very easy to work with in all respects. Is that a great surprise? Not at all when you know how opensource development projects and people usually are. But I love to remark what a delight it has been to work with them.

Hope they loved being in Almería as well :-)

About the conference, we have collected some resources:


Akademy 2017 has been possible only thanks to an extraordinary team of kind and hardworking people, especially the core team: Aleix Pol, Albert Astals, Kenny Duffus and Lydia Pintscher. And let’s not forget the necessary help of the local volunteers José María Martínez, Juanjo Salvador, Rafa Aybar and Cristóbal Saraiba.

We should also thank Francisco Gil and the University of Almería, whose help was decisive in making the event happen.

And a special acknowledgement to Rubén Gómez, who was crazy enough to propose Almería as host of the 2017 edition and was the main driving force behind the local organization team.

As said, a pleasure and an honour!

04 de August de 2017

Back from GUADEC

After spending a few days in Manchester with other fellow GNOME hackers and colleagues from Endless, I’m finally back at my place in the sunny land of Surrey (England) and I thought it would be nice to write some sort of recap, so here it is:

The Conference

Getting ready for GUADEC

I arrived in Manchester on Thursday the 27th, just in time to go to the pre-registration event, where I met the rest of the gang and had some dinner, and that was already a great start. Let’s forget about the fact that I lost my badge even before leaving the place, which has to be some kind of record (losing the badge before the conference starts, really?), but all in all it was great to meet old friends, as well as some new faces, that evening already.

Then the 3 core days of GUADEC started. My first impression was that everything (including the accommodation at the university, which was awesome) was very well organized in general, and the venue made for a perfect place to hold this type of event, so I was already impressed even before things started.

I attended many talks and all of them were great, but if I had to pick my 5 favourite ones I think those would be the following, in no particular order:

  • The GNOME Way, by Allan: A very insightful and inspiring talk; it made me think about why we do the things we do, and why it matters. It also kicked off an interesting pub conversation with Allan later on and I learned a new word in English (“principled”), so believe me, it was great.
  • Keynote: The Battle Over Our Technology, by Karen: I have no words to express how much I enjoyed this talk. Karen was very powerful on stage and the way she shared her experiences and connected them to why Free Software is important did leave a mark.
  • Mutter/gnome-shell state of the union, by Florian and Carlos: As a person who is getting increasingly involved with Endless's fork of GNOME Shell, I found this one particularly interesting. Also, I found it rather funny at points, especially during “the NVIDIA slide”.
  • Continuous: Past, Present, and Future, by Emmanuele: Sometimes I talk to friends and it strikes me how quickly they dismiss things like CI/CD as “boring” or “not interesting”, which I couldn't disagree with more. This is very important work and Emmanuele is kicking ass as the build sheriff, so his talk was very interesting to me too. Also, he's got a nice cat.
  • The History of GNOME, by Jonathan: Truth be told, Jonathan had already given a rather similar talk internally at Endless a while ago, so it was not entirely new to me, but I enjoyed it a lot too because it brought back so many memories: starting with when I started with Linux (RedHat 5.2 + GNOME pre-release!), when I used GNOME 1.x at the university and then moved to GNOME 2.x later on… not to mention the funny anecdotes he told (I never imagined the phone ringing while sleeping could be a good thing). Perfectly timed for the 20th anniversary of GNOME indeed!

As I said, I attended other talks too and they were all great as well, so I'd encourage you to check the schedule and watch the recordings once they are available online; you won't regret it.

Closing ceremony

And the next GUADEC will be in… Almería!

One thing that surprised me this time was that I didn't do as much hacking during the conference as on other occasions. Rather than seeing it as a bad thing, I believe that's a clear indicator of how interesting and engaging the talks were this year, which made for a perfect return after missing 3 editions (yes, my last GUADEC was in 2013).

All in all it was a wonderful experience, and I can't thank and congratulate the local team and the volunteers who ran the conference this year enough, so here is a picture I took where you can see all the people standing up and clapping during the closing ceremony.

Many thanks and congratulations for all the work done. Seriously.

The Unconference

After 3 days of conference, the second part started: “2 days and a bit” (I was leaving on Wednesday morning) of meeting people and hacking in a different venue, where we gathered to work on different topics, plus the occasional high-bandwidth meeting in person.

GUADEC unconference

As you might expect, my main interest this time was around GNOME Shell, which is my main duty at Endless right now. This means that, besides trying to be present in the relevant BoFs, I've spent quite some time participating in discussions that gathered both upstream contributors and people from different companies (e.g. Endless, Red Hat, Canonical).

This was extremely helpful and useful for me since, now that we have rebased our fork of GNOME Shell onto 3.22, we're in a much better position to converge and contribute back to upstream in a more reasonable fashion, as well as to collaborate on implementing new features that we already have in Endless but that haven't made it upstream yet.

And talking about those features, I’d like to highlight two things:

First, the discussion we held with both developers and designers about the new improvements being considered for both the window picker and the apps view, where one of the ideas is to improve the apps view by (maybe) adding a new grid of favourite applications that users could customize, reorder… and so forth.

According to the designers this proposal was partially inspired by what we have in Endless, so you can imagine I would be quite happy to see such a plan move forward, as we could help with the coding side of things upstream while reducing our diff for future rebases. The thing is, this is just a proposal for now, so nothing is set in stone yet, but I will definitely be interested in following and participating in the relevant discussions around it.

Second, as my colleague Georges already vaguely mentioned in his blog post, we had an improvised meeting on Wednesday with one of the designers from Red Hat (Jakub Steiner), where we discussed a very particular feature upstream has wanted for a while and which Endless implemented downstream: management of folders using DnD, right from the apps view.

This is something Endless has had in its desktop since the beginning of time, but the implementation relied on a downstream-specific version of folders that Endless OS implemented even before folders were available in the upstream GNOME Shell, so contributing that back would have been… “interesting”. Fortunately, we have now dropped that custom implementation of folders and embraced the upstream solution during the last rebase to 3.22, so we're in a much better position to contribute our solution upstream. Once this lands, you should be able to create, modify, remove and use folders without having to open GNOME Software at all, just by dragging and dropping apps on top of other apps and folders, pretty much in the same fashion as you would do it on a mobile OS these days.

We're still at an early stage for this, though. Our current solution in Endless is based on some assumptions and tools that will simply not be the case upstream, so we will have to work with both the designers and the upstream maintainers to make this happen over the next months. Thus, don't expect anything to land in the next stable release yet, but simply know we'll be working on it, and it should hopefully arrive in the not too distant future.

The Rest

This GUADEC has been a blast for me, and probably the best and my favourite edition among all those I've attended since 2008. The reasons for such a strong statement are diverse, but I think I can mention a few that are clear to me:

From a personal point of view, I never felt so engaged and part of the community as this time. I don't know if that has something to do with my recent duties at Endless (e.g. flatpak, GNOME Shell) or with something less “tangible”, but that's the truth. I can't state it well enough.

From the perspective of Endless, the fact that 17 of us were there is something to be very excited and happy about, especially considering that I work remotely and only see 4 of my colleagues from the London area on a regular basis (i.e. one day a week). Being able to meet people I don't regularly see, as well as some new faces, in person is always great, but having them all together “under the same ceilings” for 6 days was simply outstanding.

GNOME 20th anniversary dinner


Also, as it happened, this year was the celebration of the 20th anniversary of the GNOME project and so the whole thing was quite emotional too. Not to mention that Federico’s birthday happened during GUADEC, which was a more than nice… coincidence? 🙂 Ah! And we also had an incredible dinner on Saturday to celebrate that, couldn’t certainly be a better opportunity for me to attend this conference!

Last, a nearly impossible thing happened: despite the demanding schedule that an event like this imposes (and I'm including our daily visit to the pubs here too), I managed to go running between 5km and 10km every single day, which I believe is a first in my life. I have definitely taken my running gear to other conferences, but this was the only time I took it that seriously, and also the first time I joined other fellow GNOME runners in the process, which was quite fun as well.

Final words

I couldn't finish this extremely long post without a brief note to acknowledge and thank the many people who made this possible this year: the GNOME Foundation and the amazing group of volunteers who helped organize it, the local team who did an outstanding job at all levels (venue, accommodation, events…), my employer Endless for sponsoring my attendance in several ways and, of course, all the people who attended the event and made it such a special GUADEC this year.

Thank you all, and see you next year in Almería!

Credit to Georges Stavracas

29 de July de 2017

HackIt! 2017 Level 3

usbmon is a feature offered by the Linux kernel to collect input/output traces on the USB bus. Level 3 presents us with plain-text traces obtained through usbmon.

After much analysis, right where we expected information about the specific USB device, we saw that three lines had been censored.

We also tried to convert the ASCII output of usbmon to .pcap format so we could handle the dump more comfortably from Wireshark, but we didn't manage it. So we had to parse the text file. Before writing a parser ourselves, we looked for an existing one and found several alternatives, among them this Python script, which also rendered its output as nice HTML.
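To give an idea of what such a parser deals with, here is a minimal Python sketch (not the script we used) that splits one line of usbmon's human-readable output into its documented fields; the example trace line is made up for illustration:

```python
def parse_usbmon_line(line):
    """Parse one line of usbmon text output.

    Per the kernel's usbmon documentation the fields are: URB tag,
    timestamp (microseconds), event type (S=submit, C=callback,
    E=error), the address word (transfer type & direction, bus,
    device, endpoint), a status word, the data length, and
    optionally the data words after an '=' sign.
    """
    parts = line.split()
    if len(parts) < 6:
        return None
    xfer, bus, dev, ep = parts[3].split(':')
    return {
        'tag': parts[0],
        'timestamp_us': int(parts[1]),
        'event': parts[2],       # S, C or E
        'transfer': xfer,        # e.g. Ii = interrupt IN, Bo = bulk OUT
        'bus': int(bus),
        'device': int(dev),
        'endpoint': int(ep),
        'status': parts[4],
        'length': int(parts[5]),
        'data': ''.join(parts[7:]) if len(parts) > 7 and parts[6] == '=' else None,
    }

# A made-up example line in the usbmon text format:
rec = parse_usbmon_line(
    'ffff88003f9eb900 1133065860 S Ii:2:001:1 -115:128 2 = 0100')
print(rec['transfer'], rec['device'], rec['length'], rec['data'])  # → Ii 1 2 0100
```

From records like these you can then group transfers by endpoint and extract the payloads (which is essentially what we did with the isochronous data).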

We tried everything. The first step was to extract the isochronous data and interpret it: as a raw image, as audio, as a signal of ones and zeros… nothing.

Many hours after the challenge was posted, nobody had solved it. If I remember correctly, on Monday the 24th they published the first hint: “It looks like audio, but audio it is not”, which left us exactly where we were (well, it did reduce the search space). In the early hours between Monday and Tuesday (around 4am) the second hint was published, but it caught us exhausted, and most of us asleep. The second hint restored some of the deleted lines of the dump, providing information about the device that generated them. A laser. We probably should have thought of it earlier, knowing the author of the challenge and his fondness for these devices.

I don't know whether w0pr solved this challenge with the second hint or with the first… but they solved it! So I hope they publish the write-up so I can update this post with a direct link.

UPDATE (30/07/2017): w0pr has published the full solution to this challenge.

28 de July de 2017

HackIt! 2017. Level 2

And let's go for bingo! Level 2 asks us to find the secret hidden in a network traffic capture. This actually reminded me of another HackIt challenge… from 2007 (!). Opening the capture with a modern Wireshark, we see Bluetooth frames. Probably from some device that sends data, in this case this SoundCore-Sport-XL-Bluetooth-Speaker.

We select Telephony / RTP / RTP Streams…

And we hear the signal, but heavily distorted. Here we ran into a problem: we wanted to export the audio so we could process it with Audacity. But, surprise, it was not possible:

Here we came up with a solution that was perhaps not very “orthodox”, but which worked perfectly 🙂 We opened Camtasia, configured it to record system audio and hit Play stream in Wireshark. We then exported the recording to mp4 (in Camtasia) and opened it with Audacity, applying its speed-change effect and finding the solution (woohoo!) 🙂

27 de July de 2017

HackIt! 2017. Solutions flying off my hands

Gooood morning Vietnam! Following the tradition (2014, 2013, 2012, 2010, 2009, 2008, 2007) that I sadly abandoned in 2015 and 2016, I want to publish solutions for this year's Euskal Encounter HackIt! again (25 editions already, OMG, how time flies…). First, greetings to @marcan42 and @imobilis for organizing the event for yet another year. Second, a request to @marcan42 to upload the missing challenges (have you stopped being lazy yet?) };-)

Let's go with Level 1. Easy, but more complex than in previous years for a warm-up challenge.

We are presented with a seat reservation map… somewhat modified:

If we open the code and look at the differences with the original occupation map, we see something odd (in production code!):

Xor functions, magic numbers, one of them 31337 (eleet!, elite)… it smells like we are on the right track 😉

The idea was to set a breakpoint in the production w() function and another in the challenge's version. In the challenge window, we grab the data array and inject it into the production one, to see what that function would paint on screen. Surprise!
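For readers unfamiliar with this style of obfuscation, here is a hypothetical Python sketch of it (the challenge's actual function and constants are not reproduced here, only the general xor-with-a-magic-number idea):

```python
MAGIC = 31337  # "eleet" — the kind of magic number that gives the trick away

def xor_encode(values, key=MAGIC):
    # Xor is its own inverse: (x ^ k) ^ k == x, so the same function
    # both obfuscates and recovers the data array.
    return [v ^ key for v in values]

data = [72, 97, 99, 107]           # some plain values
obfuscated = xor_encode(data)      # what you might find in the page's array
assert xor_encode(obfuscated) == data  # applying xor again recovers them
```

This symmetry is why injecting the challenge's array into the unmodified production function was enough to reveal what it painted.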

UPDATE: our colleagues at w0pr have started publishing a write-up with their impressions. Essential reading!

22 de July de 2017

Running GUI applications with Docker in OSX

A quick guide on how to run Docker containers requiring a GUI on a Mac with XQuartz. First, you'll need to download and install XQuartz. Run it.
Now, start socat to expose the local XQuartz socket on a TCP port:

$ socat TCP-LISTEN:6000,reuseaddr,fork UNIX-CLIENT:"$DISPLAY"

Pass the display to the container:

docker run -e DISPLAY=<vboxnet-ip>:0 pygame xclock

I'm assuming the address you pass in DISPLAY is your vboxnet IP address and that “pygame” is the name of the Docker image that contains your GUI application (in this case, my container's name is pygame and the GUI is xclock).

For security purposes, you can use the socat option range=<your-ip>/32 to filter by IP address (allowing only your primary IP to connect). Check also your XQuartz Preferences / Security tab:


You can also launch a bash shell first and, from there, run your GUI apps.


04 de July de 2017

Endless OS 3.2 released!

We just released Endless OS 3.2 to the world after a lot of really hard work from everyone here at Endless, including many important changes and fixes that spread pretty much across the whole OS: from the guts and less visible parts of the core system (e.g. a newer Linux kernel, OSTree and Flatpak improvements, updated libraries…) to other more visible parts including a whole rebase of the GNOME components and applications (e.g. mutter, gnome-settings-daemon, nautilus…), newer and improved “Endless apps” and a completely revamped desktop environment.

By the way, before I dive deeper into the rest of this post, I'd like to remind you that Endless OS is an operating system that you can download for free from our website, so please don't hesitate to check it out if you want to try it yourself. But now, even though I'd love to talk in detail about ALL the changes in this release, I'd like to talk specifically about what has kept me busy most of the time since around March: the full revamp of our desktop environment, that is, our particular version of GNOME Shell.

Endless OS 3.2 as it looks in my laptop right now


If you’re already familiar with what Endless OS is and/or with the GNOME project, you might already know that Endless’s desktop is a forked and heavily modified version of GNOME Shell, but what you might not know is that it was specifically based on GNOME Shell 3.8.

Yes, you read that right, no kidding: a now 4 years old version of GNOME Shell was alive and kicking underneath the thousands of downstream changes that we added on top of it during all that time to implement the desired user experience for our target users, as we iterated based on tons of user testing sessions, research, design visions… that this company has been working on right since its inception. That includes porting very visible things such as the “Endless button”, the user menu, the apps grid right on top of the desktop, the ability to drag’n’drop icons around to re-organize that grid and easily manage folders (by just dragging apps into/out-of folders), the integrated desktop search (+ additional search providers), the window picker mode… and many other things that are not visible at all, but that are required to deliver a tight and consistent experience to our users.

Endless button showcasing the new "show desktop" functionality


Aggregated system indicators and the user menu

Of course, this situation was not optimal and finally we decided we had found the right moment to tackle this situation in line with the 3.2 release, so I was tasked with leading the mission of “rebasing” our downstream changes on top of a newer shell (more specifically on top of GNOME Shell 3.22), which looked to me like a “hell of a task” when I started, but still I didn’t really hesitate much and gladly picked it up right away because I really did want to make our desktop experience even better, and this looked to me like a pretty good opportunity to do so.

By the way, note that I say “rebasing” between quotes, and the reason is because the usual approach of taking your downstream patches on top of a certain version of an Open Source project and apply them on top of whatever newer version you want to update to didn’t really work here: the vast amount of changes combined with the fact that the code base has changed quite a bit between 3.8 and 3.22 made that strategy fairly complicated, so in the end we had to opt for a combination of rebasing some patches (when they were clean enough and still made sense) and a re-implementation of the desired functionality on top of the newer base.

Integrated desktop search

The integrated desktop search in action

New implementation for folders in Endless OS (based on upstream’s)

As you can imagine, and especially considering my fairly limited previous experience with things like mutter, clutter and the shell's code, this proved to be a pretty difficult thing for me to take on, if I'm truly honest. However, maybe it's precisely because of all those things that, now that it's released, I look at the result of all these months of hard work and I can't help but feel very proud of what we achieved in this pretty tight time frame: we have a refreshed Endless OS desktop now with new functionality, better animations, better panels, better notifications, better folders (we ditched our own in favour of upstream's), better infrastructure… better everything!

Sure, it's not perfect yet (there's no such thing as “finished software”, right?) and we will keep working hard over the next releases to fix known issues and make it even better, but what we have released today is IMHO a pretty solid 3.2 release that I feel very proud of, and one that is out there now for everyone to see, use and enjoy, and that is quite an achievement.

Removing an app by dragging and dropping it into the trash bin

Now, you might have noticed I used “we” most of the time in this post when referring to the hard work that we did, and that’s because this was not something I did myself alone, not at all. While it’s still true I started working on this mostly on my own and that I probably took on most of the biggest tasks myself, the truth is that several other people jumped in to help with this monumental task tackling a fair amount of important tasks in parallel, and I’m pretty sure we couldn’t have released this by now if not because of the team effort we managed to pull here.

I’m a bit afraid of forgetting to mention some people, but I’ll try anyway: many thanks to Cosimo Cecchi, Joaquim Rocha, Roddy Shuler, Georges Stavracas, Sam Spilsbury, Will Thomson, Simon Schampijer, Michael Catanzaro and of course the entire design team, who all joined me in this massive quest by taking some time alongside with their other responsibilities to help by tackling several tasks each, resulting on the shell being released on time.

The window picker as activated from the hot corner (bottom – right)

Last, before I finish this post, I’d just like to pre-answer a couple of questions that I guess some of you might have already:

Will you be proposing some of these changes upstream?

Our intention is to reduce the diff with upstream as much as possible, which is why we have left many things from upstream untouched in Endless OS 3.2 (e.g. the date/menu panel) and why we already made some fairly big changes for 3.2 to get closer in places where we previously had our very own thing (e.g. folders), so rest assured we will upstream everything we can, as far as it's possible and makes sense for upstream.

Actually, we have already pushed many patches to the shell and related projects since Endless moved to GNOME Shell a few years ago, and I don’t see any reason why that would change.

When will Endless OS desktop be rebased again on top of a newer GNOME Shell?

If there is anything we learned from this “rebasing” experience, it is that we don't want to go through it ever again, seriously :-). It made sense to be based on an old shell for some time while we were prototyping and developing our desktop based on our research, user testing sessions and so on, but we now have a fairly mature system and the current plan is to move on from a situation where we had changes on top of a 4-year-old codebase to one where we stay closer to upstream, with more frequent rebases from now on.

Thus, the short answer to that question is that we plan to rebase the shell more frequently after this release, ideally two times a year so that we are never too far away from the latest GNOME Shell codebase.

And I think that's all. I've already written too much, so if you'll excuse me I'll get back to my Emacs (yes, I'm still using Emacs!) and let you enjoy this video of a recent development snapshot of Endless OS 3.2, created by my colleague Michael Hall a few days ago:

(Feel free to visit our YouTube channel to check out for more videos like this one)

Also, quick shameless plug just to remind you that we have an Endless Community website which you can join and use to provide feedback, ask questions or simply to keep informed about Endless. And if real time communication is your thing, we’re also on IRC (#endless on Freenode) and Slack, so I very much encourage you to join us via any of these channels as well if you want.

Ah! And before I forget, just a quick note to mention that this year I’m going to GUADEC again after a big break (my last one was in Brno, in 2013) thanks to my company, which is sponsoring my attendance in several ways, so feel free to say “hi!” if you want to talk to me about Endless, the shell, life or anything else.

11 de June de 2017

Next Tracker is 2.0.0

There are a few plans on the boil for Tracker:

Splitting core from miners

Tracker is usually deemed a “metadata indexer”, although that's just half the truth. Even though Tracker could essentially be considered that in its very early days, it made a bold move back in 0.7.x to using Sparql as the interface to store and retrieve this metadata, so that both the indexers and the applications using the metadata talk the same language.

So in reality, the storage and the query language are useful by themselves. As per the plans, you'll now have to distinguish between:
– Tracker the RDF store, and
– Tracker miners, the infrastructure and binaries to translate a number of entities (be it files or whatever) into Sparql, using the former for storage

Making standalone Sparql endpoints possible

At the time of the move to Sparql, Tracker was conceived as a global store of deeply interconnected data, Nepomuk seemed the most obvious choice to represent the data for indexing purposes, and client isolation was basically left to third parties. However, times change, sandboxing is very present, and Tracker's global store doesn't help with it.

Based on initial work from Philip Van Hoof, quite some shuffling has been going on in wip/carlosg/domain-ontologies to make multiple Sparql endpoints (“database connections” in non-fancy speak) possible. This will allow applications to use private sparql endpoints, and possibly other ontologies (“schemas” in non-fancy speak) than Nepomuk. The benefits are twofold: this will be a lot friendlier to sandboxing, and it also increases the versatility of the Tracker core.

Switching to Semantic versioning

This change was suggested some time ago by Philip Van Hoof, and the idea has been growing on me. Tracker has usually made longstanding backwards-compatibility promises, not just in terms of API, but also in terms of data compatibility. However, bumping the version as per the GNOME schedule results in multiple stable versions being used out there, with the patch-backporting and bug-management overhead that implies.

In reality, what we want to tell people (and usually do!) is “use the latest stable Tracker”; often the stuff is already fixed there, and there's no reason why you would want to stick to a stable series that will receive limited improvements. I do hope that semantic versioning conveys Tracker's “later is better” stance; with some optimism I see it standing on 2.y.z for a long time, and maybe even 2.0.z for the Tracker core.

But versions are like age, just a number :). Tracker 2.x services and client-side API will foreseeably be backwards compatible with 1.x from the reader's standpoint. The Tracker core could be made parallel-installable with 1.0, but I wouldn't even bother with the high-level infrastructure; 2.x will just be a better 1.x.

But this doesn't mean we jump the GNOME unstable-cycle ship. IMHO it's still worth following to let newly introduced code bake in; it just won't result in gratuitous version bumps if API changes/additions are not in sight.

Code and API cleanup

In its more active past, Tracker saw a lot of code accretion while trying to get buy-in; this included multiple miners, extensions and tools. But it was never the goal of Tracker to be the alpha and omega of indexing in itself, but rather to have applications update and blend the data for their own consumption. Fast-forward a few years and the results are mixed: Tracker got a fair amount of success, although apps almost exclusively rely on data produced by Tracker's own miners, while most of these extensions have been bitrotting since much of the activity and manpower went away.

Sam Thursfield started doing some nice cleanups of maemo/meego-specific code (yes, we still had that) and making Tracker use Meson (which indirectly started exposing some of that bitrotted code). Several of these extensions implementing Tracker support shall just go to the attic and should be done properly; that will at least be the case with nautilus thanks to Alexandru Pandelea's GSOC work :).

But the version bump and code split are too good an opportunity to miss :). Some deprecated/old API will also go away, probably none of which you've ever used; there will be some porting documentation anyway.

05 de June de 2017

Almería bid for hosting GUADEC in 2018

Well, «Alea iacta est». The deadline for bidding to host GUADEC 2018 closed on the 4th of June. And we proposed Almería.

GUADEC is the GNOME Project's annual developers conference for the European zone, though attendees commonly come from Asia and the Americas as well. This is another step to promote Almería as an international technology hub and to stimulate new local techies, bringing students and professionals closer to world-class development communities and to the ethos and practice of opensource development. In the age of a «Github portfolio» as a developer's CV, practising opensource development is probably the best way to train new programmers and enhance their employability, learning development practices alongside veteran programmers and mature communities. And it's important to show how opensource software is a key component of our digitally connected society.

Again, our legal instrument, absolutely key to these activities, is the UNIA CS students association at the University of Almería. The same association I helped found in 1993 is living a new golden age thanks to a new group of enthusiastic members. Their support and collaboration made it possible to host PyConES 2016 which, as far as we know, has been the biggest software development conference ever held in our city. That conference was a morale milestone for us, and now we feel we can at least host other world-class conferences with the appropriate quality. Another key component of these proposals is the current university campus endowment and the support of the present university governing team, to which we are grateful for their help and support. This is another step to increase the projection of the University of Almería as an international knowledge hub.

Finally, I want to express my hope of making a significant contribution to the GNOME project, to which I have been connected for more than a decade. Hopefully by 2018 I will have updated my pending GNOME packages in Fedora 🙈

So here it is: the Almería candidacy to host GUADEC 2018.

22 de May de 2017

Security policies for online passwords. Basic recommendations

When it comes to passwords, I have often heard this complaint: “I can't remember that many passwords!”. And we well know the consequence: when choosing a password on different websites, not wanting to tire the brain too much, we pick more or less the same password (superpasahitza, ikusietaikasi!, hegoakebakibanizion2002…). Maybe adding a hyphen, an underscore or a simple number. Or writing it backwards. Yes, I know you have done that at some point. No, not just the last one; all of them.

Well then, a more advanced strategy would be to use passwords of different levels. That is, if what I want to protect is very important, I'll pick a strong password: _?=superpasahitzasinkronikopatologikoa2014! . If it's not that important, something easier: -IkusiEtaIkasietaOndoEntzun-. And for all the remaining websites, a lousy password: emoiztazumuxutxuemaitie 🙂

That way, if one password falls into dirty hands, well, nothing much is lost. But think it through: what could happen? Threats sent in your name, -very- insulting messages spread around, loss of privacy… And all that without even considering that sometimes you have reused the “high level” password on another important website, or on several of them.

If you don't have a solid policy for creating and managing passwords, your passwords are not solid at all. What's more, you are weakening your own security and that of those around you every time you reuse a password. And yes, your password is most likely already in the hands of crackers. You don't believe it? You can test it yourself here. The passwords of 3,752 million accounts are in there. Or, in the best case, their hashes. But I won't go into what a hash is and why it matters now; I'll leave that for another article. Let's assume they are passwords directly. Does your account appear there?
Mine does, 14 TIMES, in different places. And not a single one of them was my fault. The companies managing my accounts did not, evidently, manage them that well.
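Incidentally, the kind of leaked-password lookup service mentioned above (presumably Have I Been Pwned) can nowadays be queried without ever sending your password: its Pwned Passwords range API uses k-anonymity, where you only send the first 5 hex characters of the password's SHA-1 hash and compare the returned suffixes locally. A minimal Python sketch of the client side:

```python
import hashlib

def hibp_range_query_parts(password):
    """Split a password's SHA-1 hash into the 5-char prefix you would
    send to the range API and the suffix you compare locally."""
    digest = hashlib.sha1(password.encode('utf-8')).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_query_parts('password')
# Only `prefix` would leave your machine (GET .../range/<prefix>);
# the response lists hash suffixes with breach counts, which you
# then match against `suffix` offline.
print(prefix)  # → 5BAA6
```

This way the service never learns which password you checked, only a 5-character hash prefix shared by hundreds of other passwords.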

So, what to do? First, do not repeat passwords. EVER. Second, choose strong passwords. And finally, use a password manager.

Use strong passwords

Of course, using strong passwords presupposes, on the one hand, that you won't repeat them. And on the other, that you will choose ones that are not easy to guess: many different characters, mixing upper and lower case letters, adding symbols and numbers. Fairly long. For example:


(made up of 16 characters). Did I come up with that monstrosity myself? No, I took it from a website. It randomly picks passwords that meet the criteria you specify. There are many websites like that; choose whichever you want, or make up the passwords yourself.
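A generator like the ones on those websites can be sketched in a few lines of Python, using the standard `secrets` module for cryptographically secure randomness (a minimal sketch, not the code of any particular site):

```python
import secrets
import string

def generate_password(length=16):
    """Pick `length` characters uniformly at random from upper and
    lower case letters, digits and symbols, using a CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))

pwd = generate_password()
print(len(pwd))  # → 16
```

Using `secrets` rather than `random` matters here: the latter is predictable and not suitable for generating secrets.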

Use a password manager

Even if memorizing the previous password is possible (using mnemonics: fruit ” EGG 7 ` zip USA park skype KOREAN < TOKYO / rope XBOX = ), it isn't easy. And since we can't repeat them (NO! We cannot repeat them!), even less so. So what then? Use a password manager. There are many on the market: online, local, proprietary software, free software... I won't wear you out; I'll just recommend the one I use: KeepassX. It is free software (GPL) and runs on multiple platforms (Win, OSX, Linux). You'll learn to use it easily, but here are a few pointers.

KeepassX works with two main files: the database and the key file. The database stores all your user/password/website triplets, and it is encrypted (if you ever lose it, nothing happens, since the data is encrypted). It uses the AES algorithm (Advanced Encryption Standard) for that. AES is symmetric: it needs a password to encrypt and the same password to decrypt. Security-wise it is a very strong algorithm; we can use it with confidence.
So, every time we use KeepassX it will ask for the master password that decrypts our password database. That's it: you'll have to memorize and remember one strong password. But only one.
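Conceptually, that master password is first stretched into an encryption key that AES can use. A small Python illustration of the idea with a standard key-derivation function (the salt and iteration count here are illustrative defaults, not KeepassX's actual parameters):

```python
import hashlib
import os

def derive_key(master_password: str, salt: bytes, rounds: int = 100_000) -> bytes:
    # Stretch the master password into a 256-bit key, the size AES-256 expects.
    # Salt and round count are illustrative, not KeepassX's own KDF settings.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, rounds)

salt = os.urandom(16)
key = derive_key("the-one-strong-password-you-memorize", salt)
```

The same password and salt always yield the same key, which is what makes encrypting and decrypting with a single secret possible.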

As an additional security measure it can require a key file (it does so by default). So we can say it uses 2FA (Two Factor Authentication), an authentication process based on two factors: on one side something you know (the password), and on the other something you have (the key file).

KeepassX is a local application. And I can already hear your complaints: "I work on several computers; how will I manage everything if it works locally?" We'll use a trick here: put the files KeepassX needs (the database and/or the key file) in Dropbox. Remember, they are encrypted. Or, following a stronger security practice: put the database in Dropbox and the key file on a USB stick (that USB stick you carry in your pocket or backpack 😉

As I said, use whatever application and security policy you prefer, but please, do not reuse passwords.

PRO-TIP: if you choose KeepassX, you don't need any online password generator. The application itself will offer you strong, randomly generated passwords.

20 May 2017

Frogr 1.3 released

Quick post to let you know that I just released frogr 1.3.

This is mostly a small update incorporating a bunch of translation updates, a few changes aimed at improving its flatpak version (the desktop icon had been broken for a while until a few weeks ago), and removing some calls deprecated in recent versions of GTK+.

Ah! I’ve also officially dropped support for OS X via gtk-osx, as I had been systematically failing to update and test it (I only use frogr from GNOME these days) for a long time, so it did not make sense for me to keep pretending that the Mac version is something usable and maintained.

As usual, you can go to the main website for extra information on how to get frogr and/or how to contribute to it. Any feedback or help is more than welcome!


3 May 2017

WebKitGTK+ remote debugging in 2.18

WebKitGTK+ has supported remote debugging for a long time. The current implementation uses WebSockets for the communication between the local browser (the debugger) and the remote browser (the debug target, or debuggable). This implementation was very simple and, in theory, you could use any web browser as the debugger, because all the inspector code was served over the WebSocket connection. I say in theory because in practice this was not always so easy, since the inspector code uses newer JavaScript features that other browsers don’t implement yet. The other major issue of this approach was that the communication between debugger and target was not bidirectional, so the target browser couldn’t notify the debugger about changes (like a new tab opening, a navigation, or that it is about to be closed).

Apple abandoned the WebSockets approach a long time ago and implemented its own remote inspector, using XPC for the communication between debugger and target. They also moved the remote inspector handling to JavaScriptCore, making it available for debugging JavaScript applications without a WebView too. In addition, Apple also uses the remote inspector to implement WebDriver. We think this approach has many more advantages than disadvantages compared to the WebSockets solution, so we have been working on making it possible to use this new remote inspector in the GTK+ port too. After some refactoring to separate the cross-platform implementation from Apple’s, we could add our own implementation on top of it. It is already available in WebKitGTK+ 2.17.1, the first unstable release of this cycle.

From the user’s point of view there aren’t many differences. With the WebSockets approach we launched the target browser this way:


This hasn’t changed with the new remote inspector. To start debugging, we used to open any browser and load:

With the new remote inspector we have to use any WebKitGTK+ based browser and load


As you may have noticed, it’s no longer possible to use just any web browser: you need a recent enough WebKitGTK+ based browser as the debugger. This is because of the way the new remote inspector works: it requires a frontend implementation that knows how to communicate with the targets. In Apple’s case that frontend implementation is Safari itself, which has a menu with the list of remotely debuggable targets. In WebKitGTK+ we didn’t want to force a particular web browser as the debugger, so the frontend is implemented as a built-in custom protocol of WebKitGTK+. Loading inspector:// URLs in any WebKitGTK+ WebView will show the remote inspector page with the list of debuggable targets.
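Putting the two flows side by side as commands (the host, port and browser binary here are illustrative, not taken from the original post):

```shell
# Illustrative address of the machine running the target browser.
HOST=192.168.0.50
PORT=5000

# In both flows the target browser is launched with the inspector
# server enabled through this environment variable:
export WEBKIT_INSPECTOR_SERVER="$HOST:$PORT"
# MiniBrowser &   # any WebKitGTK+ browser would do; not launched here

# Old WebSockets flow: load this URL in any browser.
echo "http://$HOST:$PORT"
# New flow: load this URL in a WebKitGTK+ based browser.
echo "inspector://$HOST:$PORT"
```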

It looks quite similar to what we had, just a list of debuggable targets, but there are a few differences:

  • A new debugger window is opened when the Inspect button is clicked, instead of reusing the same web view. Clicking Inspect again just brings the window to the front.
  • The debugger window loads faster, because the inspector code is not served over HTTP, but loaded locally like the normal local inspector.
  • The target list page is updated automatically, without having to manually reload it when a target is added, removed or modified.
  • The debugger window is automatically closed when the target web view is closed or crashes.

How does the new remote inspector work?

The web browser checks for the WEBKIT_INSPECTOR_SERVER environment variable at startup, the same way it did with the WebSockets. If present, the RemoteInspectorServer is started in the UI process, running a D-Bus service listening on the given IP and port. The environment variable is propagated to the child web processes, which create a RemoteInspector object and connect to the RemoteInspectorServer. There’s one RemoteInspector per web process, and one debuggable target per WebView. Every RemoteInspector maintains a list of debuggable targets that is sent to the RemoteInspectorServer when a target is added, removed or modified, or when explicitly requested by the RemoteInspectorServer.
When the debugger browser loads an inspector:// URL, a RemoteInspectorClient is created. The RemoteInspectorClient connects to the RemoteInspectorServer using the IP and port from the inspector:// URL and asks for the list of targets, which the custom protocol handler uses to build the web page. The RemoteInspectorServer works as a router, forwarding messages between RemoteInspector and RemoteInspectorClient objects.
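As a toy model of that message flow (the class and method names below are illustrative, not WebKit's actual C++ API):

```python
# Toy sketch: a server routes target-list updates from per-process
# inspectors (targets) to connected clients, which back the inspector:// page.
class ToyInspectorServer:
    def __init__(self):
        self.targets = {}      # target id -> description (one per WebView)
        self.clients = []

    def connect_client(self, client):
        # A client gets the current target list as soon as it connects.
        self.clients.append(client)
        client.target_list_changed(dict(self.targets))

    def register_target(self, target_id, description):
        # Called when a target is added or modified; every client is
        # notified so its target-list page refreshes automatically.
        self.targets[target_id] = description
        for client in self.clients:
            client.target_list_changed(dict(self.targets))

class ToyInspectorClient:
    def __init__(self):
        self.known_targets = {}

    def target_list_changed(self, targets):
        self.known_targets = targets

server = ToyInspectorServer()
client = ToyInspectorClient()
server.connect_client(client)
server.register_target(1, "WebView: https://example.org")
```

The point of the sketch is the routing role: targets never talk to clients directly, only through the server.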

23 April 2017

Back to Linux: Ubuntu GNOME on the Mac Mini and MacBook Pro

After putting it off for a long time, I finally decided to leave Mac OSX and return to Linux. There are several reasons, and I can calmly say that OSX is by no means a bad system; in fact, for a user like me it is an excellent combination of the power and flexibility of Unix with the availability of native mainstream applications.

But I wanted something else.

Ubuntu GNOME and Eclipse

With Linux I had grown used to being able to modify whatever I wanted in the system. Using it every day I tend to get bored, and in OSX at most I could change the wallpaper and the color of the top bar; practically nothing else.

On the other hand, for the kind of use I give the computer, the tools on Linux are much more at hand; on OSX they are available through brew or macports, but they are always second-class citizens. Not to mention trying to compile PHP against SQL Server, Oracle and the like. Even something as simple on Linux as writing to NTFS drives can become hell on OSX.

In short, the moment I wanted to switch OSX’s look & feel to a dark one, to rest my eyes, and couldn’t, together with Canonical’s announcement that it was abandoning Unity, was the push I needed to take the leap. Oh, and since I had been disconnected from GNOME development for a long time, this review of Ubuntu GNOME got me even more excited.

And here I am, writing from Ubuntu GNOME on my main computer. First I ran some tests on my laptop, whose setup I can rebuild from scratch at any time. I used it for a few days and it convinced me completely: all the hardware was supported without doing anything special; even some Bluetooth headphones that don’t work on OSX did work on Linux. As for the software, it felt like coming back home, with the bonus that GNOME is perhaps the best desktop system I have ever used. Mind you, before the Apple fans jump on me: if you haven’t tried it, you have no grounds for an opinion. Only by using it do you realize what an excellent job they have done with GNOME.


Before you tell me “ah no, I use application X”, let’s be clear: everyone uses whatever system suits them best, and that depends a lot on the applications you need for your daily work. In my case both OSX and Linux work for me, so choosing one or the other comes down to other factors, like the ones described above.

To give context, this is what I use frequently: Java SDK, Android SDK, Android NDK, Eclipse, GNU tools (build tools, bash, etc.), MySQL, PHP, a web browser, Dropbox, GIMP. To a lesser extent: a disk usage analyzer, system monitors (temperature, resource usage), etc. As you can see, all these tools are available on both operating systems, natively on Linux and through various mechanisms on OSX.

Surely some of you will want to give it a try, or to solve some problem or doubt about installing Linux on Apple hardware, so in the rest of the article I’ll document what I have been tweaking in the system.


To avoid filling this post with images, I’ll leave reference screenshots as links. The initial installation boils down to the following steps:

Once the system is installed, the machine will reboot and start Linux. If you want to boot into OSX, use the CMD+X combination again, unless the rEFInd menu shows up (sometimes it appears for me, sometimes it doesn’t).

Temperature, fans and CPU usage

With a freshly installed system, the first thing you’ll notice is that the machine gets hot. That’s because a utility to control the fan is missing. As a personal preference I like to see CPU usage and temperature in the panel, so let’s install everything at once. In a terminal:

sudo apt-get install lm-sensors cpufrequtils macfanctld tlp

These are:

  • lm-sensors: reads temperature and fan rotation speed information
  • cpufrequtils: adjusts how the CPU switches speed; the idea is to use a high speed only when necessary
  • macfanctld: using the temperature readings, automatically controls fan power; if the temperature rises it increases fan power, and lowers it as the temperature drops
  • tlp: applies tweaks to reduce battery usage on laptops

Don’t worry, in no case will you use these tools directly, unless you want to modify their behavior. Normally you install some desktop application that uses them to control the system. In my case I installed GNOME Shell extensions that show usage information in the top bar and let you adjust the system graphically. My picks were:


CPUFreq GNOME Extension

Installing these applications helped me understand a problem I always had with OSX on my Mac Mini: the machine gets too hot, to the point where the WiFi card started to fail. The Mac Mini is in general VERY quiet unless it is working intensively, and that is simply because the fan doesn’t kick in until the temperature is very high. So, in general, the machine ran hot but silent. When you install macfanctld, the first thing you notice is that the fan seems to be running all the time, but that is simply because the stock configuration is meant to keep the system at reasonable temperatures, and for that it has to use the fan constantly.

So the options are: a) high temperature and silence, or b) low temperature with the fan running. Since we are talking about Linux now, you just edit macfanctld’s configuration file and adjust it as you like. You can define the minimum fan speed and two temperatures: the low temperature, below which the fan runs at the defined minimum, and the high temperature, above which the fan runs at full power.
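For reference, settings like the ones I describe end up in /etc/macfanctld.conf; a sketch with my values (key names recalled from memory, so double-check them against the comments in the shipped file):

```
# /etc/macfanctld.conf -- key names may vary slightly between versions
fan_min: 1800          # minimum fan speed, in RPM
temp_avg_floor: 60     # below this average temp the fan stays at fan_min
temp_avg_ceiling: 70   # above this average temp the fan runs at full speed
```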


Freon GNOME Extension

I don’t have exact data, but remembering how this worked on OSX I can estimate the values my machine was using. Summing everything up:

  • OSX on the Mac Mini: 1500 RPM, min 80º, max 90º (estimated)
  • macfanctld defaults: 2000 RPM, min 45º, max 55º (macfanctld.conf)
  • My macfanctld settings: 1800 RPM, min 60º, max 70º (customized)
  • Current config, a bit warmer but quieter: 1800 RPM, avg 70º – 80º, periphery 50º – 68º

Now the fan stays more active, but I no longer burn myself when touching the Mac Mini.

For those who have this problem and are using OSX: I understand there are applications that let you adjust these values too, I just didn’t know about them until I installed macfanctld.

Dark theme

What really triggered the switch was having a dark desktop to rest my eyes, and Ubuntu GNOME comes ready to make that change very easily. Just open the GNOME Shell Tweak Tool and, under Appearance, enable Global dark theme, then under Theme -> GTK+ select Adwaita-dark.

GNOME Dark settings

Java and Eclipse

Although Ubuntu includes Eclipse, it is a relatively old version. Personally, I prefer to install the latest Eclipse from the official site along with Oracle’s JDK 8.

First, install JDK 8 with the following steps, courtesy of WebUpd8:

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer

To install Eclipse, download the installer, unpack it and run it. It presents several installation options; I chose Eclipse for Java Developers.

Additional Eclipse tweaks

Plain Eclipse doesn’t include everything I usually need, so in Help -> Eclipse Marketplace I always install the same extensions, now joined by the ones that provide a dark look. A little-known Marketplace feature is that you can mark extensions as favorites, making it easier to install them in a fresh Eclipse: just open your favorites and hit “Install all”.

My favorite Eclipse extensions are:

  • Eclipse C/C++ IDE (CDT)
  • PHP Development Tools (PDT)
  • Android Development Tools (ADT)
  • Subclipse
  • Eclipse Moonrise UI Theme
  • Eclipse Color Theme
  • Eclipse Data Tools Platform (DTP)

The last two help give it the dark look. Once they are installed, go to Preferences -> Appearance -> Theme and select Moonrise Standalone. Then, under Preferences -> Appearance -> Colors, pick whichever theme you like; I’m using Gedit Original Oblivion.

Eclipse on Ubuntu GNOME


To avoid duplicating my music library, I imported the files directly from my old iTunes library. Initially I could see the OSX partition in Files -> Other locations, but the /Users/fcatrin/Music folder had no read permission. To fix it, I booted into OSX rescue mode (CMD+R), went into my home folder from a terminal and ran:

chmod 755 Music

Then I rebooted and added the folder in Rhythmbox.

iTunes library in Rhythmbox

An equalizer is not included out of the box, but one is available as a plugin. Here you can find it, along with other quite interesting ones: Installing rhythmbox 3.0 plugins … the easy way!


With Android, I ran into a couple of problems when trying to start an emulator:

First, it wouldn’t let me create a virtual device (AVD); creating the SD card failed. That was because the utility that creates the SD card is a 32-bit binary and requires 32-bit libraries that are not installed as part of the SDK. The fix is simple:

sudo apt-get install lib32stdc++6

The second problem was video acceleration, which is not ready to use out of the box. You need to install glxinfo and update a library bundled with the SDK, pointing it at the one already on your system. It boils down to:

sudo apt-get install mesa-utils

cd ~/android-sdks/tools/lib64/libstdc++/
rm libstdc++*
ln -s /usr/lib/x86_64-linux-gnu/libstdc++.so.6 .

With that, glxinfo is installed and libstdc++ updated to point at the one installed on your system.

Other tweaks

There are other adjustments you can make so the system is better tuned to existing websites, the local language, and so on. The list includes:

  • Install the Spanish spell-checking packages
  • Install the Microsoft fonts
  • Install additional fonts
  • Install Dropbox
  • Fix text fields in Firefox

For the first two, this is enough:

sudo apt-get install ttf-mscorefonts-installer aspell-es myspell-es

For the third, download and open the bundled TTF files; they will open in the font installer.

For Dropbox, you can open the Software application installer, found among the icons on the right, and search for Dropbox. The system will download and install it automatically.

Dropbox shows an activity icon in the notification bar that doesn’t exist as such in GNOME; but, as usual, there is an extension that enables it, called TopIcons Plus.

As for languages, I usually write in English and Spanish interchangeably; unfortunately, Firefox only allows one language at a time. There is a way to merge the spell-checking files into a single language, but I haven’t done it yet.

Evolution was more flexible here: it can be configured with more than one language at the same time.

When using a dark theme, Firefox can have problems with text fields, since some sites set only the background color or only the text color, and the fields become invisible because they assume a white background. There are several options to fix the dark-theme text-field problem; I personally settled on adding a userContent.css file.
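The userContent.css fix boils down to forcing both colors at once, so neither assumption can break the page; a minimal version (the selectors and the profile path are the usual convention, adjust to taste):

```css
/* ~/.mozilla/firefox/<profile>/chrome/userContent.css
   Force both background and text color on form fields, so sites that
   set only one of the two stay readable under a dark GTK+ theme. */
input, textarea, select {
  background-color: #fff !important;
  color: #000 !important;
}
```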

I think that’s all for now; I’ll surely keep adding details to this post. I hope to stay on Linux for a good while.

Finally: just as I remembered, fonts on Linux look much smoother and better defined than on OSX.

21 April 2017

Loa: Memories of the Past

After my post about Programming in the ’90s, several sansanos (UTFSM folks) showed up with more memories from that era, and I remembered a little gem I had kept for years which, apparently, has already disappeared from the net. Here is the full transcription:

From utfsm!not-for-mail Mon Jul 28 18:30:21 1997
Path: utfsm!not-for-mail
From: (Eduardo Romero U.)
Newsgroups: usm.general,usm.cachureo
Subject: Loa: Redescuerdos del Pasado
Date: 11 Jul 1997 20:21:18 GMT
Organization: Utfsm Amateur Radio Group.
Lines: 201
Distribution: local
Message-ID: <5q64ju$sj3$>
X-Newsreader: TIN [UNIX 1.3 BETA-950824-color PL0]
Xref: utfsm usm.general:878 usm.cachureo:859

Para mas estrugarse las lagrimas, un recuerdo de los annos mozos del VM.
From:         Francisco Javier Fernandez M <FFERNAND@LOA.DISCA.UTFSM.CL>
Subject:      Curso de redes

    Se han dado cuenta de los mensajes del curso de redes. Hay que hacer
notar que han evolucionado.
    Quien no recordara esos estupidos mensajes como

subject: nada
"Esto es una prueba... chao"

subject: (vacio, porque la inteligencia no les da pa saber que es subj)
"Hola, soy nuevo y me llamo XXX estoy en ing. civil en xxx .. chao amigos"

subject: REDES

    Notese la estupidez cronica mostrada en los mensajes. Es como cuando
un idiota se para frente a un microfono, pone cara de burro mascando
limones , sopla 3 veces (ni 2 ni 4, tiene que ser 3) y dice "1 2 3
probando probando"
    Ademas andan con sus libritos de redes pa' toas partes.

weon:    "De donde vienes?"
aweonao: "Del curso de Redes Internacionales" Lease con voz
          de cuico y cara de imbecil
weon:    "Ahhhh" (cara de idiota al cubo) "Y que es esa wea?"
Aweonao: "La verdad es que no te lo podria decir en pocas palabras
          pero es una wea mas chora que la cresta. Ademas me van a dar
          un diploma que me va a servir para mi curriculum"
weon:    "Shuta! que capo eres. Me podrias invitar a la sala IBM pa ver
          como es la weaita" (cara de tarupido {tarado+estupido})
aweonao: "Pero no faltaba mas viejo, por supuesto"

     Aweonao lleva a weon a los VM. Aweonao no pesca su cuenta I5bit0xx
o redintxx pero weon se mete todos los dias a los tarros, se hace de
n amigos, se hecha el semestre y se cambia a ejecucion.

    Y esta historia es ciclica. Llueven los aweonaos y los weones.
Pero no todos son iguales. Tambien hay aweonaos que no se sacan el 55
en la prueba y no les dan cuenta .. (o no J...s C....y ?)

Tambien hay weones que le cambian la password, emocionadisimos por
usar el nuevo comando DIRM PW y despues se les olvida y se cuelgan
del timbre para preguntarsela al op.

Tambien estan los que usan el Chat para conversar y le mandan
mensajes weones al operador y mandan un CMD UCHCECVM Q SYS al relay...

Ademas, revisando unas consolitas, hemos visto que hay weones
que mandan el tipico "Hola, me llamo xxx. Estas ocupado(a)?"
a maquinas como MAILER, TCPIP o VTAM.

Primera fase: El prehistorismo vmniano
      Se caracterica por ejecutar comandos como DIR, no sacarle
el pito a las teclitas, usar el contraste y brillo a maximo para
cagarse los ojos, buscarle como loco la disketera al terminal y
preguntar "Pa que es esa tapita?" ... "Y esta perillita?"

Segunda fase: Edad media
        Wow! nuestro amigo descubre que la perillita es para el
volumen del beep. Aqui se pasa horas y horas weiando con el
pitito. Nuestro habil amigo descubre el DISCADOC (y se entretiene
mas encima ! como sera de weon). Busca algun nodo de estados unidos
y se pone a mandar CPQ N como loco. Busca su victima y empieza con
esos tipicos "Hello, are you busy?" (notese ... weas ya es bilingue)
     En esta etapa ya empieza a traer amigos para que vean lo pulento
que es, y a la polola tambien a ver si se exita con la wea.
       Pero luego ... Oh! descubre el relay

Tercera fase: Como hacha pa'l relay:
       Esta fase es cronica. Empieza a divagar por el relay y no lo
saca nadie del terminal. Manda fotos pa' EEUU y le llegan postales.
Su satisfaccion es inmensa, y llega a un cuasi-orgasmo cuando
hay linea por la uchcecvx, por que sabe que podra jugar con el relay.
       Nuestro amigo ya deja el discadoc. Pero recuerda que en su
casa tiene un sinclair Z80. Aqui empiezan los dialogos a consultoria.

weon: Tell consulte Necesito ayuda
Imsg: Consulte not in cp directory
weon: Tell consulta Ahh! entiendo, estas ocupado, verdad?
Imsg: Consulta not loged on
weon: Tell consulta Oye! no tienes porque decirme weon!
.... y se repite el ciclo

       Por fin se da cuenta de lo que hacia y se caga de la risa.
Lo primero que hace es contarsela a otro como que "A un amigo le paso"
       Computin descubre el listserv. WOW! que impresionante!
De nuevo a la carga con el DISCADOC (fascinado). Se mete a
20 o 30 listas. El reader se le llena y en vez de conversar por relay
se la pasa todo el dia leyendo correo.

Weon: "Esto se acabo!"
weon: pur rdr all
weon: tell listserv signoff * (netwide

Cuarta fase: Weas descubre internet
      Si Bitnet era la raja, esto la cago! Lo primero es el irc.
El irc es fundamental, ya que es la primera vez que usa el comando
HX, que sera el mas usado en su estadia en los VM. Porque se
queda colgado 20 veces al dia. Aqui conoce el tipico.:

op:   SI?
weon: Buenos dias. Soy xxx de la cuenta xxx y soy tan idiota que se
      ma quedo pegada la cuenta. Le puede hacer un force , plis?
op:   Okay.  BEEEEEEEP (que pito mas desagradable)

       Luego entra al fascinante mundo de FTP. Se pasa horas cagandose
los ojos frente al SIMTEL LISTING. Wevea y wevea hasta que trae
todos los archivos que se le antojan. (en este momento aprende que es
un disco temporal y que con un disco de 10 Mb te llega un warning
del op).
     Y llegamos al telnet! aqui empieza lo bueno. A mister weveta
le dan todos los ataques de hacker y hace un telnet al primer host
que pilla y sucede lo siguente

Dick SUN/OS SYSTEM INC. vers 1.3 release 4.7
login: root
password: root    (wow! notese la habilidad)
login incorrect

login: root
password: sun/os. (y jura que se va a meter!)
login incorrect

login: root
password: pico   (aqui le sale el chilenismo)
login incorrect

Connection closed by foreign host

       Si ustedes creen que se chorio, estan equivocados. Aprieta
PF1 y sigue dandole.
       Luego siguen estupideces varias como gopher, archie, whois, etc.

Quinta fase: Weas programador
       Weitas se aburre de ser un simple usuario, y motivado con
modificar 500 veces el profile , quiere aprender rexx.
       Aqui conoce a Mitzio. Un hito en programacion, hackeo, weveo,
replys y derecho de autor de programas y manuales.
       Aprende un poco de rexx y el primer programa que hace no le
weas:   "Mitzio! No funciona!"
Mitzio: "Mmmm pero si es re facil. te falta solo un strip(arg*var(parse
         (a-2||fr"cp log"Execio34) exit(0):
weas:   "Ahh . como no se me ocurrio antes"
.... y asi sucesivamente.

Sexta fase: VM? buah!
         Esta fase es crucial. Aqui weas tiene que decidir si se queda
en los vm o se mete a otras plataformas.
         Va a los Ps y se encuentra con la paternal figura de C. Libano:
C.L.:  "Usted es mechon"
weas:  "Si, de ing. ejec en xxx"  (con una cara de orgullo tremenda)
weas:  "Pero .."
weas:  "Okay. Salvo este archivo y me voy"
C.L.:  "Fuera... AHORA!"

         Y se va ...
         Pero baja un piso y se mete a informatica. "Chucha los
terminales ricos" (refiriendose a los wyse) . Y no resiste a
meter el nombre de su cuenta en el LOGIN: que lo llama a
probar. Pero. se da cuenta que no tiene cuenta abajo. Oh! desilucion
         Pregunta a un gallo con cara de esperar que compile su
mega programa en C.
weas:  "oye, yo tengo cuenta en los ibm, no sirven aca"
gallo: "Ja! ja! Esto es UNIX compadre, ubicate"
weas:  "Que e' esa wa?"
gallo: "Como?, no sabes? El sistema operativo del futuro, viejo"
weas:  "Ahh, fijate que yo se Msdos y VM/SP"
gallo: "Ah! de veras que eso se usa alla arriba, en Jurasic Park"
      Y weas mira la pantalla y ve que para un misero directorio
el gallo hace un ls -la & (obviamente el & tiene que ir. Cuando
se ha visto un informatico que no deje procesos en backgroud
pa puro weiar)
       Weas se trauma. Dice que esto no es para el.
       Y sigue en su cuenta de Vm. Seguiran los FTP, los
Telnets, las conversaciones de relay...
       Un dia llega a su terminal y ve que redintxx no esta en
el directorio de cp. Cae en un profundo trauma y muere.

P.d.: Otras innovaciones de weones son: Los que se meten al circulo
      de radioaficionados y ocupan la CE2USM. Los que se meten al
      encuentro de estudiantes y ocupan la EIEI. Idem con la IEEE
      Tambien algunos se consiguen la cuenta de un profe, el profe
      se va de la USM y siguen con cuenta per secula seculorum
     (o no J...o Z....a ?)

Sin mas que agregar



Programming in the ’90s

Programming has always been fun for me, but hard. Don’t get me wrong: I can achieve good results, but that doesn’t mean it’s easy, only that I’m stubborn enough to keep going until I get what I want. Turning into reality something that only existed in the imagination is one of my main motivations, and if it poses some interesting technical challenge, even better.

I think one of the reasons I’m no longer drawn to, say, video game programming is that now you have the freedom to do anything, and what you mostly work on is gameplay and content. What really attracted me was the technical challenge, like trying to port an arcade game to the Atari (800): how to use the 4 colors per line properly, or the four sound channels, or squeeze everything out of its 1.79 MHz CPU with barely 3 registers. I remember I could spend a whole afternoon working on an assembly “routine” in a notebook until reaching its best version. It was a time when a simple optimization could mean the difference between something being feasible or not.



FSTR music player


Well, that was in the ’80s, and I’ll save that story for a future article about Prince of Persia for Atari. Now let’s go “not so far back”, but still to a time that no longer exists, one I think would be hard to imagine for anyone who didn’t live it, and which those of us who did have been forgetting over time. Nowadays it is so easy to learn something that we no longer remember how things were done when Stack Overflow didn’t exist, there was nobody to ask and, worse still, there was no documentation of any kind.

The Internet in the early ’90s

I remember the first half of the ’90s fondly: more powerful computers were becoming affordable and some information for learning to program them was starting to appear. For example, in the UTFSM library you could find some Intel books on x86 programming and, most importantly, some source code to study in a few corners of the Internet.


Intel 286 programming

iAPX 286


Seen from today that sounds very natural, but it was very different from what we have now. To begin with, there was no Internet access at home, only at the universities, and being a freshman (“mechón”) like me… you couldn’t even use the Internet at the university. So it wasn’t a matter of just typing what you were looking for into a search engine. No sir! There was no search engine; you didn’t even have a web browser. The information lived mainly in newsgroup services (Usenet), imagine something like a universal, text-only Internet forum, and in FTP download sites driven from the command line. To search, there was an application called Archie that looked for files, so only if a site had a decent index could you hope to find something.



Usenet newsgroups (via TIN)


Now, since the Internet wasn't as big as it is today, the sites with information about PC programming were few and therefore very well known. There was one repository in particular, mirrored on several FTP servers, called Simtel, and it was paradise.  Not only was there source code to study, there was documentation too! All of it within reach of a GET command.  The problem was that you couldn't get there if access to those sites was blocked or, worse still, if you were a freshman like me.

Being a freshman at UTFSM in the early '90s

And this is where, I think for the first time, I'll talk in public about the untold stories of my UTFSM days, when you knew the knowledge was out there, you just couldn't reach it. This was around 1993-1994, and UTFSM paid for international traffic, so it was allowed only for students taking courses or doing activities that justified it. The rest of the students could only reach the national mirrors that hosted the newsgroups (greetings, chile.comp.pc), but Simtel wasn't there.  So something had to be done.

Besides having no access to Simtel because it sat on international networks (how funny that sounds now), there was the inconvenience that even if you did manage to get some information, the only copy you could take home was in your head, because you couldn't copy it to floppy disks - let alone pendrives; the USB port wouldn't even be invented for several more years. The only access was a monochrome text terminal where you could at least decompress and read the files in situ. Given the state of the technology everything was a text file, even the magazines, so that was enough.  Still, the problem of reaching Simtel remained.



Wyse 50 text terminal


The "research" labs

And this is where the usual friends come in: Luis Cueto, Max Celedón, Cesar Hernández.  Being in upper years, they had other friends with access to other labs where "research" was done, and which therefore did have access to the international nodes. From there you could reach Simtel, but there was still no way to take files home to study them calmly: these were IBM terminals connected to a mainframe, accessible only to a select group of operators - white coats included - whom you had to summon with a bell to get a process killed whenever it hung… which happened regularly, judging by how often that famous bell rang.



IBM: I never saw them, but this gives you an idea


And here is where Max's audacity comes into play. The information-download machinery worked like this: Max, Luis or César would get an IBM account from some "researcher" friend, which let us use that lab to reach Simtel, as long as you were willing to impersonate, in person, the real owner of the account. I wasn't one of the brave ones; I was too young to realize that the users of that lab were not exactly known for their social skills, so they were unlikely to ever ask your name. Those accounts kept dying off, but Max always showed up with a new one; accounts with names like i5elo200 or i5esp101 were among the favorites, and Max even got one of his own later on, i5mceled.



Getting started in the MSX community with "someone's" account


Once the files were found and downloaded in that IBM lab, you had to copy them over to your Unix account in the lab that had no access to the international nodes, but did have floppy drives. Transferring the files was easy because those labs were networked, and as long as nobody checked the logs of the Unix FTP server there would be no problem.  Once the files were in your account came the final step: copying them to a floppy.

Asking favors from the executioners

The last step, copying the files to a floppy, was the trickiest, because you had to ask a person. And not just any person. In a special area of the Unix lab (labsw to its friends), there was a group of about 4 people behind a glass wall: untouchable, omnipotent, omniscient. That separation was no whim; they were the only ones with access to every Internet node and to the Unix workstations of the day. While regular students like us used a text terminal, they used machines with 128MB of RAM, 21-inch screens, huge hard drives and laser mice, when the norm at home was 1MB of RAM, a 14-inch screen and a ball mouse.  Yes sir, they were the chosen ones - literally, since you had to apply for the job - and they had the power to close your account at any time (omnipotent) and to know everything you were doing (omniscient).


Sun SPARCstation


Yes, these were the people you had to ask to copy to floppy the files you had obtained from forbidden nodes, using accounts that weren't yours, impersonating people you didn't even know. And some of them were famous for their bad temper, with several victims whose accounts had been closed for much less. We will always remember fondly the famous Arcadia, about whom we'll give no details, to protect his true identity.

But this was no problem for Max. The way he saw it, all you had to do was say you needed to copy an assignment from your Unix account to a floppy, and that was that.  His bet was that the operator wouldn't bother inspecting the files; you just had to use the right filenames, and that's how an archive could contain Michael Abrash's optimization and graphics articles without anyone ever knowing.

And so it went!

Over time, we started downloading PC hardware information to learn how to program the VGA, the Adlib, the Sound Blaster, BIOS calls, DOS calls; then even more interesting material started to appear, like undocumented VGA features, the famous Mode X, smooth scrolling, sound algorithms, hacking… you can imagine what a paradise that was for someone who until then had only the PC's user manual.

And as so often happens - just ask Penta and SQM - we grew confident over time and started downloading guitar tabs, demos (from the demoscene), shareware games, music in MOD and MIDI formats… everything was an "assignment". Assignment, assignment, assignment.

Click here to watch the video

The welcome

Until one day Max arrived at the Unix lab and there was Arcadia waiting for him, together with the rest of the Olympus. And they were not alone: with them was the supreme being (literally): Horst von Brand, lord and master of the Unix lab, founder and father of that and other empires. If Horst wanted to talk to you, it was because you had done something very good, or very bad. Ours being the latter case.

Such an honorable welcome had one clear, precise purpose: to present Max with the transfer logs of all the "assignments" to date, which by then added up to megas and megas of international traffic, multiplied by their monetary equivalent.  As the story goes, a large percentage of the university's staggering data-transfer bill was due to our "assignments".



With Max in the summer of 1994


The rest of the story is known only to Max and the welcome committee.  His account was closed, of course, and eventually so was mine, but for other reasons that I may tell in a future post.

i5esp101@loa, i5meceled@loa, hydefus@inf, human@inf: we will never forget you.

And what did I do with all the downloaded information? Everything I could and wanted to! From experiments like MOD-playback routines that were never used, to an application for playing MSX (8-bit computer) music on our PCs.  Even the sound routines I got to see back then helped me years later to build DeFX - which later took me to tvnauta - and was eventually reborn as MusicTrans.  The C parts of MusicTrans and RetroX also owe a great deal to what I learned in those years.

When you remember how hard it was to learn to program back then, compared with what we have today, there is no excuse if you want to build something interesting: the answers are one click away!

P.S.: Many details have been omitted for readability. Some events or specifications may not be entirely accurate; they would be if I hadn't waited 20 years to write this post.

As a teaser, here is a demo of the music application, to which I'll devote an upcoming article.

Click here to watch the video

16 April 2017


Through sheer carelessness I haven't talked here about the work we are doing at the UNIA Free Software Office for the University of Almería. In part it's another of my stubborn bravados, another attempt to force the social machinery toward progress. But I also genuinely believe it is a worthwhile activity, another step toward modernizing the University of Almería and broadening the knowledge and skills of students of computer science and other technology programs at this university. We also believe our approach is relatively novel, although there is much more that could be done.

In practice, the activities we are carrying out are:

  • curricular internships in companies
  • conferences

and as far as conferences go, we are on a magnificent streak:

and a few more are under consideration.

None of this could be a reality without the commitment of the UNIA association and the support of the University of Almería itself.

Other lines of work we should develop are:

  • courses and workshops on open source technologies;
  • promoting student participation in programs like Google Summer of Code; we are fortunate that two software projects based in Almería take part as hosts: P2PSP and MRPT;
  • promoting participation in university programming contests, such as UGR's free software project competition and the national university free software contest;
  • promoting that course assignments, final degree projects and doctoral work be carried out as open source projects or, better still, within existing development communities.

If you are interested in learning more or taking part, please head to our discussion subforum.

In short: we are working on it.

21 March 2017

How to back up your Telegram messages

Suppose you want to back up the messages of a Telegram group (channel, supergroup, private conversations, users, whatever…).  There can be several reasons: a simple backup for peace of mind, because you are about to leave a group, because the group is being shut down… or, as in my case, for statistical purposes.

As far as I know, Telegram doesn't offer direct tools for this job, but since it uses an open, well-documented protocol, some people have done the hard work of implementing client applications from scratch. One of those applications is tg, written in C, free software and runnable from the command line. tg is "just" a Telegram client, but it offers - among many other interesting features - the option of running in server mode. That way, through scripting, we will be able to retrieve all the information we have stored in this instant messaging platform. But let's not get ahead of ourselves 🙂 The first thing is to compile this beast, which is no easy feat (at least on OSX, where I am right now).

Download tg and follow the compilation and installation steps. In short:

git clone --recursive
cd tg

On OSX you will need to install the dependencies using brew as package manager:
brew install libconfig readline lua python libevent jansson
You will see plenty of warnings about possible version conflicts (sqlite, readline, SSL…). Brew is warning us that OSX already ships some of those libraries, and that if we want to use or compile against brew's versions we will have to adjust the environment variables (PATH, LDFLAGS, CFLAGS, PKG_CONFIG_PATH). For now we will ignore those warnings and move on to the next step:


Ideally, it should print a message saying everything is fine.

Then we reach make, where you will most likely get an error along the lines of:

lua-tg.c:664:27: error: unused function 'get_peer' [-Werror,-Wunused-function]

An error about the unused function get_peer. If that's your case, you will need to apply this patch, like so:

patch < lua.patch
Patching file lua-tg.c

and run make again. If all goes well you will get a message like the following and, as a prize, the telegram-cli executable in the bin folder.

The first time you run tg you will have to associate it with your phone number. Don't forget the +34 prefix if you are in Spain. You will receive a verification PIN on your phone (through a Telegram message). That's the PIN you must enter at the corresponding CALL line.
u026627:bin juanan$ ./telegram-cli
Telegram-cli version 1.4.1, Copyright (C) 2013-2015 Vitaly Valtman
Telegram-cli comes with ABSOLUTELY NO WARRANTY; for details type `show_license'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show_license' for details.
Telegram-cli uses libtgl version 2.1.0
Telegram-cli includes software developed by the OpenSSL Project
for use in the OpenSSL Toolkit. (
I: config dir=[/Users/juanan/.telegram-cli]
[/Users/juanan/.telegram-cli] created
[/Users/juanan/.telegram-cli/downloads] created
phone number: +34XXXXXXXXX
code ('CALL' for phone code): XXXXX
User Juanan Pereira updated flags
User Juanan Pereira online (was online [2017/03/20 14:15:45])

You now have a multitude of commands at your disposal (type help for the full list). You can check that everything went well with commands like get_self, which shows information about your own Telegram account, contact_list, which lists the names of your contacts, or channel_list, which lists the channels and groups you are subscribed to.

It's worth noting that tg offers command-line autocompletion (readline support), so typing the first letters of a command and pressing TAB fills in the rest. To exit, type quit.

To accomplish the goal set at the beginning of this post (backing up a group's messages) we will need a script that drives tg automatically. That script is telegram-history-dump, written in Ruby:

git clone

Make sure you have a recent version of Ruby (>= 2.0):

ruby --version
ruby 2.0.0p648 (2015-12-16 revision 53162) [universal.x86_64-darwin16]

Now we can decide whether to back up everything or only certain groups. I only need to back up one supergroup, so I edit the script's configuration file, config.yaml, and state in the backup_supergroups section that I only want the group "MI_GRUPO". In the remaining sections I set null.
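As an illustration only (the exact key names are assumptions modeled on the sample configuration shipped with telegram-history-dump; check the config.yaml in your checkout), the edited file could end up looking something like this:

```yaml
# Hypothetical config.yaml fragment: dump a single supergroup and
# skip everything else. Key names may differ in your version.
backup_dialogs: null
backup_chats: null
backup_channels: null
backup_supergroups:
  - "MI_GRUPO"
```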

Finally, we launch tg in server mode:

./telegram-cli --json -P 9009

(listening on port 9009 and exchanging messages in JSON format)
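Server mode means any TCP client can drive tg, not just the Ruby script. As a minimal sketch (the command names come from tg's help; the one-text-command-per-line framing and the shape of the reply are assumptions, so inspect the raw traffic before relying on it):

```python
import socket

def frame_command(cmd: str) -> bytes:
    """Frame a tg command for its line-oriented control socket:
    one text command per line (assumed framing)."""
    return (cmd.strip() + "\n").encode("utf-8")

def send_command(cmd: str, host: str = "localhost", port: int = 9009) -> str:
    """Send a single command to a running `telegram-cli --json -P 9009`
    and return the first line of the reply (assumed to be JSON)."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(frame_command(cmd))
        return sock.makefile("r", encoding="utf-8").readline()
```

With the server running, something like `send_command("contact_list")` should return the same JSON the Ruby script consumes.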

and, at last, we run the backup script:

u026627:telegram-history-dump juanan$ ruby telegram-history-dump.rb
I, [2017-03-20T14:27:23.381792 #24590]  INFO -- : Attaching to telegram-cli control socket at localhost:9009
I, [2017-03-20T14:27:23.849295 #24590]  INFO -- : Skipping XXX dialogs: .....
I, [2017-03-20T14:27:23.849485 #24590]  INFO -- : Backing up 1 dialogs: "MI_GRUPO"
I, [2017-03-20T14:27:23.849783 #24590]  INFO -- : Dumping "MI_GRUPO" (range 1-100)
I, [2017-03-20T14:27:24.854833 #24590]  INFO -- : Dumping "MI_GRUPO" (range 101-200)
I, [2017-03-20T14:27:25.860089 #24590]  INFO -- : Dumping "MI_GRUPO" (range 201-300)
I, [2017-03-20T14:27:26.864448 #24590]  INFO -- : Dumping "MI_GRUPO" (range 301-400)
I, [2017-03-20T14:27:27.869046 #24590]  INFO -- : Dumping "MI_GRUPO" (range 401-500)
I, [2017-03-20T14:27:28.872592 #24590]  INFO -- : Dumping "MI_GRUPO" (range 501-600)
I, [2017-03-20T14:27:28.977374 #24590]  INFO -- : Finished

After a few seconds (you can watch the progress in the terminal where telegram-cli is running) the result will land in the output/ folder. If you have the jq command available (if not, install it with brew install jq) you can pretty-print the resulting JSON comfortably like this:

cat output/json/MI_GRUPO.jsonl | jq
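Since the dump is one JSON object per line, the "statistical purposes" mentioned at the start are easy to serve with a short script. A sketch (the `from`/`print_name` field layout is an assumption about telegram-cli's JSON output; inspect your own dump first and adapt):

```python
import json
from collections import Counter

def messages_per_sender(jsonl_text: str) -> Counter:
    """Count messages per sender in a telegram-history-dump .jsonl dump.

    Assumes each line is one message object with a `from` sub-object
    carrying a `print_name` field (hypothetical layout; adjust to yours).
    """
    counts = Counter()
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        msg = json.loads(line)
        sender = (msg.get("from") or {}).get("print_name", "<unknown>")
        counts[sender] += 1
    return counts
```

Feeding it the contents of output/json/MI_GRUPO.jsonl and printing `most_common(10)` gives a quick top-posters table.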

I hope you found this interesting 🙂


20 March 2017

WebKitGTK+ 2.16

The Igalia WebKit team is happy to announce WebKitGTK+ 2.16. This new release drastically improves memory consumption, adds new API required by applications, includes new debugging tools, and of course fixes a lot of bugs.

Memory consumption

After WebKitGTK+ 2.14 was released, several Epiphany users started to complain about high memory usage of WebKitGTK+ when Epiphany had a lot of tabs open. As we already explained in a previous post, this was because of the switch to the threaded compositor, which made hardware acceleration always enabled. To fix this, we decided to make hardware acceleration optional again, enabled only when websites require it, but still using the threaded compositor. This is by far the biggest improvement in memory consumption, but not the only one. Even in accelerated compositing mode, we managed to reduce the memory required by GL contexts when using GLX, by using OpenGL version 3.2 (core profile) if available. With Mesa-based drivers that means the software rasterizer fallback is never required, so the context doesn't need to create the software rasterization part. And finally, an important bug was fixed in the JavaScript garbage collector timers that prevented garbage collection from happening in some cases.

CSS Grid Layout

Yes, the future, here and now, available by default in all WebKitGTK+ based browsers and web applications. This is the result of several years of great work by the Igalia web platform team in collaboration with Bloomberg. If you are interested, you can find all the details in Manuel's blog.


The WebKitGTK+ API is quite complete now, but there are always new things required by our users.

Hardware acceleration policy

Hardware acceleration is now enabled on demand again: when a website requires accelerated compositing, hardware acceleration is enabled automatically. WebKitGTK+ has environment variables to change this behavior: WEBKIT_DISABLE_COMPOSITING_MODE to never enable hardware acceleration and WEBKIT_FORCE_COMPOSITING_MODE to always enable it. However, those variables were never meant to be used by applications, only by developers to test the different code paths. The main problem with those variables is that they apply to all the web views of the application. Not all WebKitGTK+ applications are web browsers, so an application may know it will never need hardware acceleration for a particular web view (like the Evolution composer), while other applications, especially in the embedded world, always want hardware acceleration enabled and don't want to waste time and resources switching between modes. For those cases a new WebKitSetting, hardware-acceleration-policy, has been added. We encourage everybody to use this setting instead of the environment variables when upgrading to WebKitGTK+ 2.16.

Network proxy settings

Since the switch to WebKit2, where the SoupSession is no longer available through the API, it hasn't been possible to change the network proxy settings programmatically. WebKitGTK+ has always used the default proxy resolver when creating the soup context, and that just works for most of our users. But there are some corner cases in which applications that don't run under a GNOME environment want to provide their own proxy settings instead of using the proxy environment variables. For those cases, WebKitGTK+ 2.16 includes new UI-process API to configure all the proxy settings available in the GProxyResolver API.

Private browsing

WebKitGTK+ has always had a WebKitSetting to enable or disable private browsing mode, but it has never worked really well. For that reason, applications like Epiphany have always implemented their own private browsing mode, simply by using a different profile directory in /tmp for all persistent data. This approach has several issues; for example, if the UI process crashes, the profile directory is leaked in /tmp with all the personal data in it. WebKitGTK+ 2.16 adds new API to create ephemeral web views, which never write any persistent data to disk. It's possible to create ephemeral web views individually, or to create ephemeral web contexts where all associated web views are automatically ephemeral.

Website data

WebKitWebsiteDataManager was added in 2.10 to configure the default paths where website data should be stored for a web context. In WebKitGTK+ 2.16 the API has been expanded with methods to retrieve and remove the website data stored on the client side: not only persistent data like the HTTP disk cache, cookies or databases, but also non-persistent data like the memory cache and session cookies. This API is already used by Epiphany to implement its new personal data dialog.

Dynamically added forms

Web browsers normally implement the remember-passwords functionality by searching the DOM tree for authentication form fields when the document-loaded signal is emitted. However, some websites add the authentication form fields dynamically, after the document has been loaded. In those cases, web browsers couldn't find any form fields to autocomplete. In WebKitGTK+ 2.16 the web extensions API includes a new signal that notifies when new forms are added to the DOM. Applications can connect to it, instead of document-loaded, to start searching for authentication form fields.

Custom print settings

The GTK+ print dialog allows the user to add a new tab embedding a custom widget, so that applications can include their own print settings UI. Evolution used to do this, but the functionality was lost with the switch to WebKit2. In WebKitGTK+ 2.16 an API similar to the GTK+ one has been added to bring that functionality back to Evolution.

Notification improvements

Applications can now set the initial notification permissions on the web context to avoid having to ask the user every time. It's also possible to get the tag identifier of a WebKitNotification.

Debugging tools

Two new debugging tools are now available in WebKitGTK+ 2.16: the memory sampler and the resource usage overlay.

Memory sampler

This tool monitors the memory consumption of the WebKit processes. It can be enabled by defining the environment variable WEBKIT_SAMPLE_MEMORY. When enabled, the UI process and all web processes automatically take samples of memory usage every second. For every sample, a detailed report of the memory used by the process is generated and written to a file in the temp directory.

Started memory sampler for process MiniBrowser 32499; Sampler log file stored at: /tmp/MiniBrowser7ff2246e-406e-4798-bc83-6e525987aace
Started memory sampler for process WebKitWebProces 32512; Sampler log file stored at: /tmp/WebKitWebProces93a10a0f-84bb-4e3c-b257-44528eb8f036

The files contain a list of sample reports like this one:

Timestamp                          1490004807
Total Program Bytes                1960214528
Resident Set Bytes                 84127744
Resident Shared Bytes              68661248
Text Bytes                         4096
Library Bytes                      0
Data + Stack Bytes                 87068672
Dirty Bytes                        0
Fast Malloc In Use                 86466560
Fast Malloc Committed Memory       86466560
JavaScript Heap In Use             0
JavaScript Heap Committed Memory   49152
JavaScript Stack Bytes             2472
JavaScript JIT Bytes               8192
Total Memory In Use                86477224
Total Committed Memory             86526376
System Total Bytes                 16729788416
Available Bytes                    5788946432
Shared Bytes                       1037447168
Buffer Bytes                       844214272
Total Swap Bytes                   1996484608
Available Swap Bytes               1991532544

Resource usage overlay

The resource usage overlay is only available on Linux systems when WebKitGTK+ is built with ENABLE_DEVELOPER_MODE. It shows an overlay with information about the resources currently in use by the web process, like CPU usage, total memory consumption, JavaScript memory, and JavaScript garbage collector timers. The overlay can be shown/hidden by pressing CTRL+Shift+G.

We plan to add more information to the overlay in the future like memory cache status.

10 February 2017

Accelerated compositing in WebKitGTK+ 2.14.4

The WebKitGTK+ 2.14 release was very exciting for us: it finally introduced the threaded compositor, drastically improving accelerated compositing performance. However, the threaded compositor forced accelerated compositing to be always enabled, even for non-accelerated content. Unfortunately, this caused different kinds of problems for several people, and proved that we are not ready to render everything with OpenGL yet. The most relevant problems reported were:

  • Memory usage increase: OpenGL contexts use a lot of memory, and we have the compositor in the web process, so we have at least one OpenGL context in every web process. The threaded compositor uses the coordinated graphics model, which also requires more memory than the simple mode we previously used. People who use a lot of tabs in Epiphany quickly noticed that a lot more memory was required.
  • Startup and resize slowness: The threaded compositor makes everything smooth and performs quite well, except at startup or when the view is resized. At startup we need to create the OpenGL context, which is quite slow by itself, but we also need to create the compositing thread, so things are expected to be slower. Resizing the viewport is the only threaded compositor task that needs to be done synchronously, to ensure that everything is in sync: the web view in the UI process, the OpenGL viewport and the backing store surface. This means we need to wait until the threaded compositor has updated to the new size.
  • Rendering issues: some people reported rendering artifacts, or even nothing rendered at all. In most cases these were not issues in WebKit itself, but in the graphics driver or library. It's quite difficult for a general-purpose web engine to support and deal with all possible GPUs, drivers and libraries. Chromium has a huge list of hardware exceptions to disable some OpenGL extensions or even hardware acceleration entirely.

Because of these issues, people started to use different workarounds. Some people, and even applications like Evolution, started to use the WEBKIT_DISABLE_COMPOSITING_MODE environment variable, which was never meant for users, only for developers. Other people simply built their own WebKitGTK+ with the threaded compositor disabled. We didn't remove the build option because we anticipated that some people using old hardware might have problems; however, it's a code path that is not tested at all and will certainly be removed in 2.18.

All these issues are not really specific to the threaded compositor, but to the fact that it forced accelerated compositing mode to be always enabled, using OpenGL unconditionally. It looked like a good idea: entering/leaving accelerated compositing mode was a source of bugs in the past, and all other WebKit ports force accelerated compositing mode too. Other ports use UI-side compositing, though, or target very specific hardware, so the memory problems and the driver issues are not a problem for them. The obligation to force accelerated compositing mode came from the switch to coordinated graphics because, as I said, the other ports using coordinated graphics have accelerated compositing mode always enabled, so they didn't care about the case of it being disabled.

There are a lot of long-term things we can do to improve all these issues, like moving the compositor to the UI (or a dedicated GPU) process to have a single GL context, implementing tab suspension, etc., but we really wanted to fix, or at least improve, the situation for 2.14 users. Switching back to using accelerated compositing mode on demand is something we could do in the stable branch, and it improves things, at least to a level comparable to what we had before 2.14, but with the threaded compositor. Making it happen was a matter of fixing a lot of bugs, and the result is this 2.14.4 release. Of course, this will also be the default in 2.16, where we have additionally added API to set a hardware acceleration policy.

We recommend all 2.14 users to upgrade to 2.14.4 and stop using the WEBKIT_DISABLE_COMPOSITING_MODE environment variable or building with the threaded compositor disabled. The new API in 2.16 will allow setting a policy for every web view, so if you still need to disable or force hardware acceleration, please use the API instead of WEBKIT_DISABLE_COMPOSITING_MODE and WEBKIT_FORCE_COMPOSITING_MODE.

We really hope this new release and the upcoming 2.16 will work much better for everybody.

02 February 2017

Going to FOSDEM!

It’s been two years since the last time I went to FOSDEM, but it seems that this year I’m going to be there again and, after having traveled to Brussels a few times already by plane and train, this year I’m going by car: from home to the Eurotunnel and then all the way up to Brussels. Let’s see how it goes.


As for the conference, I don’t have any particular plan other than going to some keynotes and probably spending most of my time in the Distributions and the Desktops devrooms. Well, and of course joining other GNOME people at A La Bécasse, on Saturday night.

As you might expect, I will have my Endless laptop with me while in the conference, so feel free to come and say “hi” in case you’re curious or want to talk about that if you see me around.

At the moment, I’m mainly focused on developing and improving our flatpak story, how we deliver apps to our users via this wonderful piece of technology and how the overall user experience ends up being, so I’d be more than happy to chat/hack around this topic and/or about how we integrate flatpak in EndlessOS, the challenges we found, the solutions we implemented… and so forth.

That said, flatpak is one of my many development hats in Endless, so be sure I’m open to talk about many other things, including not work-related ones, of course.

Now, if you’ll excuse me, I have a bag to pack, an English car to “adapt” for the journey ahead and, more importantly, quite a few hours to sleep. Tomorrow will be a long day, but it will be worth it.

See you at FOSDEM!

21 December 2016

Let's put an end to interstitials


What a word: "interstitials". Interstitial ads; going by the dictionary, these would be ads occupying the interstices, but in reality they are ads that take over the whole screen, and they are especially annoying because:

  • They block the page we want to see. In short, they get in the way, far more than the classic pop-ups.
  • They invite fraudulent clicks: most of the time we tap on the ad not to reach the advertised product but to close it. The cancel button usually demands special dexterity (or aim) from the user on a multi-touch screen… unless your fingers are the size of a pinhead.

Interstitial ad, image by Google

Well, a little over 4 months ago, Google announced that starting January 2017 it would penalize the use of interstitial ads in mobile searches, a deadline that is now just around the corner.

Es bueno recordarlo porque, a día de hoy, la mayor parte de las páginas de actualidad en Internet siguen incorporando este tipo de anuncios como si nada, haciendo muy incómoda la lectura, sobre todo en móviles, donde deshacerse del anuncio resulta complicado y por lo que, muchas veces, un servidor decide abandonar el medio de comunicación elegido para irme a uno alternativo que me informe más y me moleste menos.

Aún me queda la esperanza de que esta penalización del buscador realmente consiga su efecto. Mientras tanto, seguiremos haciendo uso extensivo y abusivo de los bloqueadores de anuncios, pese a los avisos de determinados medios de que eso les hace daño.

15 de December de 2016

Thu 2016/Dec/15

Igalia is hiring. We're currently interested in Multimedia and Chromium developers. Check the announcements for details on the positions and our company.

08 de December de 2016

Oh, the security!

Under public domain

There has lately been a lot of fuss around Tracker as a security risk; as the de-facto maintainer of Tracker I feel obliged to comment. I’ll comment purely on the Tracker bits; I will not comment on other topics that, OTOH, were not as debated but are similarly affected, like thumbnailing, previewing, autodownloading, or the state of maintenance of gstreamer-plugins-bad.

First of all, I’m glad to tell that Tracker now sandboxes its extractors, so its only point of exposure to exploits is now much more constrained, leaving very little room for malicious code to do anything harmful. This fix has been backported to 1.10 and 1.8, and new tarballs rolled, everyone rejoice.

Now, the original post that raised the dust storm certainly achieved its dramatic effect. Despite Tracker not doing anything insecure besides calling a closed, well-known set of 3rd party libraries (which, after all, are most often installed from the same trusted sources that Tracker comes from), it’s been in the “security” spotlight across several bugs/MLs/sites with varying levels of accuracy. I’ll publicly comment on some of the assertions I’ve seen in the last days.

This is a design flaw in Tracker!

Tracker has always performed metadata extraction in a separate process for stability reasons, which means we already count on this process possibly crashing and burning away.

Tracker was indeed optimistic about the possible reasons why that might happen, but precisely thanks to Tracker’s design it’s been a breeze to isolate the involved parts. A ~200-line change hardly counts as a redesign.

All of tracker daemons are inherently insecure!, or its funnier cousin Tracker leaks all collected information to the outside world!

This security concern was only raised because of the use of 3rd party parsers (well, in the case of the GStreamer vulnerability in question, decoders; why a parsing facility like GstDiscoverer triggers decoding is another question worth asking), and this parsing of content happens in exactly one place in your common setup: tracker-extract.

Let’s dissect a bit Tracker daemons’ functionality:

  • tracker-store: The manager of your user Tracker database; it connects to the session bus and gets read/write access to a database in ~/.cache. It also notifies of changes in the database through the user bus.
  • tracker-miner-fs: The process watching for changes in the filesystem, and filling in the basic information that can be extracted from shared-mime-info sniffing (which usually involves matching some bytes inside the file, few conditionals involved), struct dirent and struct stat.
  • tracker-extract: Guilty as charged! It receives the notification of changes, and is basically a loop that picks the next unprocessed file, runs it through 3rd party parsers, sends a series of insert clauses over D-Bus, and picks the next file. Wash, rinse, repeat.
  • tracker-miner-applications: A very simplified version of tracker-miner-fs that just parses the keyfiles in the various .desktop file locations.
  • tracker-miner-rss: This might be another potential candidate, as it parses “arbitrary” content through libgrss. However, it must be configured by the user; it otherwise has no RSS feeds to read from. I’ll take the possibility of hijacking famous blogs and news sites to hack through tracker-miner-rss as remote enough to fix it after a breather.

So, setting aside per-parser specifics, Tracker consists of one database stored under 0600 permissions, with information added to it through requests on the D-Bus session bus and read by apps from a readonly handle created by libtracker-sparql; the read and write channels can be independently isolated.

If you are really terrified by your user information being stored inside your homedir, or can’t sleep thinking of your session bus as a dark alley, you certainly want to run all your applications in a sandbox, they won’t be able to poke on org.freedesktop.Tracker1.Store or sniff on ~/.cache that way.

But again, there is nothing that makes Tracker as a whole inherently insecure, at least not more than the average session bus service, or the average application storing data in your homedir. Everything that could be distrusted is down to specific parsers, and that is anything but inherent in Tracker.

Tracker-extract runs plugins and is thus unsafe!

No, tracker-extract has a modular design, but is not extensible itself. It reads a closed set of modules implemented by Tracker from a folder that should be in /usr/lib/tracker-1.0 if your setup is right. The API of these modules is private and subject to change. If anything manages to add or modify modules there, you’ve got way worse concerns.

Now, one of these extractor modules uses GStreamer, which to my knowledge is still the go-to library if you want anything multimedia on Linux, and it happens to open an arbitrary list of plugins itself; that is beyond Tracker’s control or reach.

It should be written in rust!

What do we gain from that? As said, tracker-extract is in essence a very simple loop; all the scary stuff is handled by external libraries that would still be implemented in “unsafe languages”, so Rust would be just as useful as gift wrap here.

Extraction should be further isolated into another process!

There are good reasons not to do that. Having two separate processes running completely interlocked tasks (one process can’t do anything until the other is finished) is pretty much a worst case for scheduling, context switching, performance and battery life all at once.

Furthermore, such a tertiary service would need exactly the same whitelisted syscalls and exactly the same number of ways out of the process. So I think I won’t attract the “Tracker is heavy/slow” zealots this time… There is a throwaway process, and it is tracker-extract.

The silver linings

Tracker is already more secure, now let’s silence the remaining noise. Quite certainly one area of improvement is Flatpak integration, so sandboxed applications can launch isolated Tracker instances that run under the same sandboxed environment, with the extracted data only visible within the sandbox.

This is achievable with the current Tracker design; however, the “Tracker as a service” approach sounds excessive with this status quo. Tracker needs to adapt to being usable as a local store, and it needs to become more of a generic SPARQL endpoint first.

But this is just adapting to the new times, Flatpak is relatively young and Tracker is slow moving, so they haven’t met yet. But there is a pretty clear roadmap, and we’ll get there.

10 de November de 2016

Web Engines Hackfest 2016

From September 26th to 28th we celebrated, at the Igalia HQ, the 2016 edition of the Web Engines Hackfest. This year we broke all records and got participants from the three main companies behind the three biggest open source web engines, namely Mozilla, Google and Apple. Of course, it was not only them; some other companies attended too, besides ourselves. I was an active part of the organization and, not only did we not get any complaints, people seemed comfortable and happy all around.

We had several talks (I included the slides and YouTube links):

We had lots and lots of interesting hacking and we also had several breakout sessions:

  • WebKitGTK+ / Epiphany
  • Servo
  • WPE / WebKit for Wayland
  • Layout Models (Grid, Flexbox)
  • WebRTC
  • JavaScript Engines
  • MathML
  • Graphics in WebKit

What I did during the hackfest was work with Enrique and Žan to make progress on reviewing our downstream GStreamer-based implementation of Media Source Extensions (MSE), in order to land it as soon as possible, and I can proudly say that we already did (we didn’t finish at the hackfest but managed to do it right after). We broke the bots and pissed off Michael and Carlos, but we managed to deactivate it by default and continue working on it upstream.

So, summing up: from my point of view (and not only because I was part of the organization at Igalia, this is also based on other people’s opinions), I think the hackfest was a success, and I think we will continue as we were, or maybe grow a bit (no spoilers!).

Finally I would like to thank our gold sponsors Collabora and Igalia and our silver sponsor Mozilla.

05 de October de 2016

Frogr 1.2 released

Of course, just a few hours after releasing frogr 1.1, I noticed that there was actually no good reason to depend on gettext 0.19.8 for the sole purpose of removing the intltool dependency, since 0.19.7 is enough.

So, as raising that requirement up to 0.19.8 was causing trouble packaging frogr for some distros still on 0.19.7 (e.g. Ubuntu 16.04 LTS), I’ve decided to do a quick new release, and frogr 1.2 is now out with that only change.

One direct consequence is that you can now install the packages for Ubuntu from my PPA if you have Ubuntu Xenial 16.04 LTS or newer, instead of having to wait for Ubuntu Yakkety Yak (yet to be released). Other than that, 1.2 is exactly the same as 1.1, so you probably don’t want to package it for your distro if you already did it for 1.1 without trouble. Sorry for the noise.


Frogr 1.1 released

After almost one year, I’ve finally released another small iteration of frogr with a few updates and improvements.

Screenshot of frogr 1.1

Not many things, to be honest, but just a few, as I said:

  • Added support for flatpak: it’s now possible to authenticate frogr from inside the sandbox, as well as open pictures/videos in the appropriate viewer, thanks to the OpenURI portal.
  • Updated translations: as noted in the past when I released 1.0, several translations were left incomplete back then. Hopefully the new version will be much better in that regard.
  • Dropped the build dependency on intltool (requires gettext >= 0.19.8).
  • A few bugfixes too and other maintenance tasks, as usual.

Besides, another significant difference compared to previous releases is related to the way I’m distributing it: in the past, if you used Ubuntu, you could configure my PPA and install it from there even in fairly old versions of the distro. However, this time that’s only possible if you have Ubuntu 16.10 “Yakkety Yak”, as that’s the one that ships gettext >= 0.19.8, which is required now that I removed all trace of intltool (more info in this post).

However, this is also the first time I’m using flatpak to distribute frogr so, regardless of which distribution you use, you can now install and run it as long as you have the org.gnome.Platform/x86_64/3.22 stable runtime installed locally. Not too bad! :-). See more detailed instructions on its web site.

That said, it’s important that you also have the portal frontend service and a backend implementation, so that you can authorize your flickr account using the browser outside the sandbox, via the OpenURI portal. If you don’t have that at hand, you can still use the sandboxed version of frogr, but you’d need to copy your configuration files from a non-sandboxed frogr (under ~/.config/frogr) first, right into ~/.var/app/org.gnome.Frogr/config, and then it should be usable again (opening files in external viewers would not work yet, though!).

So this is all; hope it works well and is helpful to you. I finished uploading a few hundred pictures a couple of days ago and it seemed to work fine, but you never know… the devil is in the details!


30 de September de 2016

Cross-compiling WebKit2GTK+ for ARM

I haven’t blogged in a while (mostly due to lack of time, as usual), but I thought I’d write something today to let the world know about one of the things I’ve worked on a bit during this week, while remotely attending the Web Engines Hackfest from home:

Setting up an environment for cross-compiling WebKit2GTK+ for ARM

I know this is not new, nor ground-breaking news, but the truth is that I could not find any up-to-date documentation on the topic in any public forum (the only one I found was this pretty old post from the time when WebKitGTK+ used autotools), so I thought I would devote some time to it now, so that I could save more time in the future.

Of course, I know for a fact that many people use local recipes to cross-compile WebKit2GTK+ for ARM (or simply build on the target machine, which usually takes a looong time), but those are usually ad-hoc things, with environments that are hard to reproduce locally (or at least hard for me) and, even worse, often bound to downstream projects, so I thought it would be nice to try to have something tested with upstream WebKit2GTK+ and publish it.

So I spent some time working on this with the idea of producing some step-by-step instructions including how to create a reproducible environment from scratch and, after some inefficient flirting with a VM-based approach (which turned out to be insanely slow), I finally settled on creating a chroot + provisioning it with a simple bootstrap script + using a simple CMake Toolchain file, and that worked quite well for me.
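For reference, the core of a CMake toolchain file for this kind of setup is surprisingly small. The following is a hypothetical minimal sketch: the compiler triplet and the /var/chroots/arm path are assumptions you would adjust to your own provisioned chroot.

```cmake
# Hypothetical minimal toolchain file for cross-compiling to ARM;
# /var/chroots/arm is an assumed path to the provisioned chroot.
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)

set(CMAKE_C_COMPILER arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)

# Look for headers and libraries only inside the target root,
# but keep using the host's build programs.
set(CMAKE_SYSROOT /var/chroots/arm)
set(CMAKE_FIND_ROOT_PATH /var/chroots/arm)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```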

On my fast desktop machine I can now get a full build of WebKit2GTK+ 2.14 (or trunk) in less than 1 hour, which is quite a productivity bump compared to the approximately 18h it takes to build natively on the target ARM device I have 🙂

Of course, I’ve referenced this documentation elsewhere, but if you want to skip that and go directly to it, I’m hosting it in a git repository here:

Note that I’m not a CMake expert (nor even close), so the toolchain file is far from perfect, but it definitely does the job with both the 2.12.x and 2.14.x releases as well as with trunk, so hopefully it will be useful for someone else out there too.

Last, I want to thank the organizers of this event for making it possible once again (and congrats to Igalia, which just turned 15 years old!), as well as my employer for supporting my attendance at the hackfest, even if I could not make it in person this time.

Endless Logo

20 de September de 2016

WebKitGTK+ 2.14

These six months have gone by so fast, and here we are again, excited about the new WebKitGTK+ stable release. This is a release with almost no new API, but with major internal changes that we hope will improve all the applications using WebKitGTK+.

The threaded compositor

This is the most important change introduced in WebKitGTK+ 2.14 and what kept us busy for most of this release cycle. The idea is simple: we still render everything in the web process, but the accelerated compositing (all the OpenGL calls) has been moved to a secondary thread, leaving the main thread free to run all the other heavy tasks like layout, JavaScript, etc. The result is a smoother experience in general: since the main thread is no longer busy rendering frames, it can process JavaScript faster, improving responsiveness significantly. For all the details about the threaded compositor, read Yoon’s post here.

So, the idea is indeed simple, but the implementation required a lot of important changes in the whole graphics stack of WebKitGTK+.

  • Accelerated compositing always enabled: first of all, with the threaded compositor the accelerated mode is always enabled, so we no longer enter/exit the accelerated compositing mode when visiting pages depending on whether the contents require acceleration or not. This was the first challenge, because there were several bugs related to accelerated compositing being always enabled, and even missing features, like the web view background colors that didn’t work in accelerated mode.
  • Coordinated Graphics: it was introduced in WebKit when other ports switched to doing the compositing in the UI process. We still do the compositing in the web process, but being in a different thread also requires coordination between the main thread and the compositing thread. We switched to coordinated graphics too, with some modifications for the threaded compositor case. This is the major change in the graphics stack compared to the previous model.
  • Adaptation to the new model: finally, we had to adapt to the threaded model, mainly because some tasks that used to be synchronous became asynchronous, like resizing the web view.

This is a big change that we expect will drastically improve the performance of WebKitGTK+, especially in embedded systems with limited resources, but like all big changes it can also introduce new bugs or issues. Please, file a bug report if you notice any regression in your application. If you have any problem running WebKitGTK+ in your system or with your GPU drivers, please let us know. It’s still possible to disable the threaded compositor in two different ways. You can use the environment variable WEBKIT_DISABLE_COMPOSITING_MODE at runtime, but this will disable accelerated compositing support, so websites requiring acceleration might not work. To disable the threaded compositor and bring back the previous model you have to recompile WebKitGTK+ with the option ENABLE_THREADED_COMPOSITOR=OFF.


Wayland

WebKitGTK+ 2.14 is the first release that we can consider feature complete on Wayland. While previous versions worked on Wayland, two important features were missing that made it quite annoying to use: accelerated compositing and clipboard support.

Accelerated compositing

More and more websites require acceleration to work properly, and it’s now a requirement of the threaded compositor too. WebKitGTK+ has supported accelerated compositing for a long time, but the implementation was specific to X11. The main challenge is compositing in the web process and sending the results to the UI process to be rendered on the actual screen. On X11 we use an offscreen redirected XComposite window to render in the web process, sending the XPixmap ID to the UI process, which renders the window’s offscreen contents in the web view and uses the XDamage extension to track the repaints happening in the XWindow. On Wayland we use a nested compositor in the UI process that implements the Wayland surface interface, and a private WebKitGTK+ protocol interface to associate surfaces in the UI process with the web pages in the web process. The web process connects to the nested Wayland compositor and creates a new surface for the web page that is used to render the accelerated contents. On every swap buffers operation in the web process, the nested compositor in the UI process is automatically notified through the Wayland surface protocol, and the new contents are rendered in the web view. The main difference compared to the X11 model is that Wayland uses EGL in both the web and UI processes, so what we end up with in the UI process is not a bitmap but a GL texture that can be used to render the contents to the screen using the GPU directly. We use gdk_cairo_draw_from_gl() when available to do that, falling back to glReadPixels() and a cairo image surface for older versions of GTK+. This can make a huge difference, especially on embedded devices, so we are considering using the nested Wayland compositor even on X11 in the future, if possible.


Clipboard

The WebKitGTK+ clipboard implementation relies on GTK+, and there’s nothing X11 specific in there; however, the clipboard was read/written directly by the web processes. That doesn’t work on Wayland, even though we use GtkClipboard, because Wayland only allows clipboard operations between compositor clients, and web processes are not Wayland clients. This required moving the clipboard handling from the web process to the UI process. Clipboard handling is now centralized in the UI process, and clipboard contents to be read/written are sent to the different WebKit processes using the internal IPC.

Memory pressure handler

The WebKit memory pressure handler is a monitor that watches the system memory (not only the memory used by the web engine processes) and tries to release memory under low memory conditions. This is quite an important feature in embedded devices with memory limitations. It has been supported in WebKitGTK+ for some time, but the implementation is based on cgroups and systemd, which are not available on all systems and require user configuration. So, in practice, nobody was actually using the memory pressure handler. Watching system memory on Linux is a challenge, mainly because /proc/meminfo is not pollable, so you need to poll it manually. In WebKit there’s a memory pressure handler in every secondary process (Web, Plugin and Network), so waking up every second to read /proc/meminfo from every web process would not be acceptable. This is not a problem when using cgroups, because the kernel interface provides a way to poll an EventFD to be notified when memory usage is critical.

WebKitGTK+ 2.14 has a new memory monitor, used only when cgroups/systemd is not available or configured, based on polling /proc/meminfo to ensure the memory pressure handler is always available. The monitor lives in the UI process, to ensure there’s only one process doing the polling, and uses a dynamic poll interval based on the last system memory usage to read and parse /proc/meminfo in a secondary thread. Once memory usage is critical, all the secondary processes are notified using an EventFD. Using an EventFD for this monitor too is not only more efficient than using a pipe or sending an IPC message, but also allows us to keep almost the same implementation in the secondary processes, which monitor either the cgroups EventFD or the UI process one.

Other improvements and bug fixes

Like in all other major releases there are a lot of other improvements, features and bug fixes. The most relevant ones in WebKitGTK+ 2.14 are:

  • The HTTP disk cache implements speculative revalidation of resources.
  • The media backend now supports video orientation.
  • Several bugs have been fixed in the media backend to prevent deadlocks when playing HLS videos.
  • The number of file descriptors that are kept open has been drastically reduced.
  • Fixed the poor performance with the modesetting Intel driver and DRI3 enabled.

24 de August de 2016

Wayland ♡ drawing tablets

So this is finally happening. The result of much work all through the stack from several dedicated individuals (you know who you are!) started lining up during the past few months and is now hitting master. Early in the cycle I blogged about stylus support being merged, based on the first version of the tablet protocols. Now I have the pleasure to declare GTK+ tablet support on Wayland feature complete.

As predicted in the earlier post, a second version of the protocol came through, bringing pad support for clients. What is a pad? It’s the set of buttons and other (often) tactile sensors that tablets have around the stylus-sensitive area. These devices are rather uncanny from an input-management perspective: unlike mice/keyboards, those buttons/sensors don’t have an associated action that has been chiseled onto the key or established through decades of user-interaction paradigms; they are rather meant to be user-mappable. Also, despite buttons being buttons and sensors being essentially one-dimensional axes, all resemblance to a mouse/stylus is purely coincidental; focus management is more similar to keyboards (actually, the same), and is not directly related to the stylus.

In GNOME, this action-mapping has traditionally been done in gnome-settings-daemon. X11 clients have usually been completely unaware of pad events, partly because of the oddities pointed out above. So g-s-d kept a passive grab on pad buttons and would translate those into keycombos. This has many shortcomings, though: keycombos are far from standard, accelerators are translatable, … For Wayland, we have the opportunity to fix all these shortcomings, and make pads a first class citizen.

So the right thing to do here, if we want to unambiguously map actions to pad features, is delegating the action mapping to the client. The other side of the coin is providing proper feedback about the actions. In GNOME we have session-wide OSDs to display the pad mapping, and the Wayland protocol has been tailored to cover a use case like this, so applications can directly participate in filling in this info. This is how the end result looks:

Screenshot (* Note to supervillains: Actions are stubs)

There are also changes scheduled (I know I’m late, who doesn’t like pressure!) to gnome-control-center to centralize the management of styli for all known/plugged tablets, following the very nice mockups from Jimmac.

How is this exposed in GTK+?

gtk+ master introduced GtkPadController, a GtkEventController subclass that will manage pad events, from one pad specifically or from all at once. The controller takes a GActionGroup and a set of pad action entries, each defining an action name to activate. A simple example:

controller = gtk_pad_controller_new (GTK_WINDOW (app_window),
                                     G_ACTION_GROUP (app_window),
                                     pad_device /* May be NULL for all */);
gtk_pad_controller_set_action (controller,
                               GTK_PAD_ACTION_BUTTON,
                               1, /* Second button, they're 0-indexed */
                               -1, /* All modes */
                               _("Frobnicate file"),
                               "frobnicate-file" /* illustrative action
                                                    name in the group */);

And pressing that button on that pad will trigger the related action from the GActionGroup. The given label is what you end up seeing in the OSD. I expect this object to eventually be made useful on platforms other than Wayland (X11 is the first objective), although only for the event-to-action mapping, and not entirely exempt from platform-specific issues.

This is going to be messy! why does every app need configuring? Why not a global action map?

The Wayland protocol aims to be timeless; the trouble would begin with defining a good enough initial set of global actions, and still not all actions might make sense for every app. This approach allows every application to implement pad actions, no matter how high- or low-level they are. Configurability is also optional; although the first apps that come to your head will surely implement some, they already did for styli and other input features.

Is this for me?

Depends :). If your app might be involved in artists’ or designers’ workflows, it well could be. Providing a sensible minimal set of actions to perform, even if hardcoded (think of those you’d like shortcuts for when away from the keyboard), could help a lot in integrating this previously neglected device, to the point that it might feel widely useful beyond the niche drawing-application case. By using GAction underneath, there’s also little left for you to test; you just need to ensure the action performs the intended effect when activated.

11 de August de 2016

Going to GUADEC 2016

I am at the airport, getting ready to board and begin my trip to Karlsruhe for GUADEC 2016. I’ll be there with my colleague Michael Catanzaro. Please talk to us if you are interested in browsers or Igalia.

13 de April de 2016

Chromium Browser on xdg-app

Last week I had the chance to attend for 3 days the GNOME Software Hackfest, organized by Richard Hughes and hosted at the brand new Red Hat’s London office.

And besides meeting new people and some old friends (which I admit is one of my favourite aspects of attending these kinds of events), and discovering what is now my new favourite place for fast food near London Bridge, I happened to learn quite a few new things while working on my particular personal quest: getting the Chromium browser to run as an xdg-app.

While this might not seem to be an immediate need for Endless right now (we currently ship a Chromium-based browser as part of our OSTree based system), this was definitely something worth exploring as we are now implementing the next version of our App Center (which will be based on GNOME Software and xdg-app). Chromium updates very frequently with fixes and new features, and so being able to update it separately and more quickly than the OS is very valuable.

Endless OS App Center
Screenshot of Endless OS’s current App Center

So, while Joaquim and Rob were working on the GNOME Software related bits and discussing aspects related to Continuous Integration with the rest of the crowd, I spent some time learning about xdg-app and trying to get Chromium to build that way which, unsurprisingly, was not an easy task.

Fortunately, the base documentation about xdg-app together with Alex Larsson’s blog post series about this topic (which I wholeheartedly recommend reading) and some experimentation from my side was enough to get started with the whole thing, and I was quickly on my way to fixing build issues, adding missing deps and the like.

Note that my goal at this time was not to get a fully featured Chromium browser running, but to get something running based on the version we use in Endless (Chromium 48.0.2564.82), with a couple of things disabled for now (e.g. Chromium’s own sandbox, udev integration…) and, of course, putting some holes in the xdg-app configuration so that Chromium can access the parts of the system it needs to function (e.g. network, X11, shared memory, pulseaudio…).

Of course, the long term goal is to close as many of those holes as possible using Portals instead, as well as not giving up on Chromium’s own sandbox right away (some work will be needed here, since setuid binaries are a no-go in xdg-app’s world), but for the time being I’m pretty satisfied (and even kind of surprised) that I managed to get the whole beast built and running after 4 days of work since I started :-).
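For illustration, the sandbox “holes” mentioned above are granted with xdg-app permissions along these lines (a hypothetical fragment; the exact arguments used for this Chromium build may differ):

```shell
# Hypothetical sketch: punching holes in the sandbox at build-finish time.
# The real flags used for the Chromium manifest may differ.
xdg-app build-finish chromium-app-dir \
    --share=network \
    --share=ipc \
    --socket=x11 \
    --socket=pulseaudio \
    --device=dri \
    --command=chromium
```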

But, as Alberto usually says… “screencast or it didn’t happen!”, so I recorded a video yesterday to properly share my excitement with the world. Here you have it:

[VIDEO: Chromium Browser running as an xdg-app]

As mentioned above, this is work-in-progress stuff, so please hold your horses and manage your expectations wisely. It’s not quite there yet in terms of what I’d like to see, but it’s definitely a step in the right direction, and something I hope will be useful not only for us, but for the entire Linux community as a whole. Should you be curious about the current status of the whole thing, feel free to check the relevant files in its git repository here.

Last, I would like to finish this blog post by saying thanks, especially to Richard Hughes for organizing this event, as well as to the GNOME Foundation and Red Hat for their support in the development of GNOME Software and xdg-app. Finally, I’d also like to thank my employer Endless for supporting my attendance at this hackfest. It’s been a terrific week indeed… thank you all!

Credit to Georges Stavracas


06 de April de 2016

GTK+ Wayland tablet support is merged

As far as excuses go for blowing the lint off my blog, this is a pretty good one :). Wayland tablet support is something that got really close to being merged in 3.20, but the timing didn’t pan out in the end; wayland-protocols 1.3 included a tablet v1 protocol that went unused, till now. Now that we’re in 3.22, those bits are at last being merged. First came gtk+, which you can see working in these videos:

This improved tablet support brings some goodies (also on X11): there is a new GdkDeviceTool object that can be used to track specific physical tools, and it provides the necessary info to do so in a persistent way (e.g. across runs/reboots).

A note to all GTK+ app/widget developers

TL;DR: If you’re keeping track of motion events somewhere in your app, you want to check gdk_event_get_device(); see the bullet points below and pick the best strategy for your case.

One particularity of wayland tablet management is that whenever you have a tool in proximity to a tablet, you’ll be driving a standalone onscreen pointer cursor, as opposed to the default behavior in X11 of driving the same pointer as your mouse does. This difference is not moot: it doesn’t take much thinking to realize that the user of a drawing tablet has a spare hand which could be using the mouse, so there are, in practical terms, two pointers to react to. And there are also setups where more than one drawing tablet is likely (the simplest being a laptop with an integrated tablet plus an external one).

GTK+ is itself quite well prepared for these situations, as are all its widgets; it can’t account for widget implementations elsewhere or for signal callbacks in apps, though.

But I’m not doing a drawing app, should I care?

Yes, you should. Although tablets are mostly thought of for drawing, design, etc., ideally they should be able to drive the whole desktop without oddities, just as your mouse does. This can only lead to a pleasant experience with your app regardless of the input devices of choice.

If there’s somewhere in your app where you’re tracking pointer events over time (drawing is obviously an example, but so is rubberband selection, any kind of drag-to-move, etc…), there’s a chance that it could get confused by events with different gdk_event_get_device()s coming to the widget in question. There are several things you might do:

  • First and foremost, look into GtkGesture and its several subclasses, it’s handled transparently there, and there’s a good chance that one of those (or a group) will fit your case
  • Maybe a “first come, first served” basis is sufficient for the specific purposes of your operation? You can use gtk_device_grab_add()/gtk_device_grab_remove() with that device, or check gdk_event_get_device() yourself on all events.
  • Think it’d be cool to handle the simultaneous pointers and want to go the extra mile? Again, track gdk_event_get_device(), and keep per-device state.
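The last option above, keeping per-device state, can be sketched roughly as follows. The struct, handler name, and hash-table scheme are my own illustration, not GTK+ API; only gdk_event_get_device() and the GLib calls come from the toolkit.

```c
/* Sketch: independent drag state per input device, so a stylus and a
 * mouse moving simultaneously don't step on each other's tracking. */
#include <gtk/gtk.h>

typedef struct {
  gdouble last_x;
  gdouble last_y;
} DragState; /* illustrative; not a GTK+ type */

static GHashTable *state_by_device = NULL; /* GdkDevice* -> DragState* */

static gboolean
on_motion (GtkWidget *widget, GdkEventMotion *event, gpointer user_data)
{
  GdkDevice *device = gdk_event_get_device ((GdkEvent *) event);
  DragState *state;

  if (state_by_device == NULL)
    state_by_device = g_hash_table_new_full (NULL, NULL, NULL, g_free);

  state = g_hash_table_lookup (state_by_device, device);
  if (state == NULL)
    {
      state = g_new0 (DragState, 1);
      g_hash_table_insert (state_by_device, device, state);
    }

  /* React relative to this device's own previous position, not to
   * whatever other pointer happened to deliver the last event. */
  state->last_x = event->x;
  state->last_y = event->y;

  return GDK_EVENT_PROPAGATE;
}
```

If you go with the simpler “first to come gets served” strategy instead, a single stored GdkDevice pointer plus an early-return check replaces the hash table.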

…And there’s more to come

There are big plans for mutter in this cycle, so tablet support will not be immediately merged there, but be assured it will happen in due time. Gears are also grinding on a version 2 of the tablet protocol; it will mostly focus on bridging pads (the sets of buttons and sliders along the sides of some tablets) to wayland clients, something I expect to be an outstanding improvement over X11, so… stay tuned!