Planeta GNOME Hispano
Hispanic GNOME activity, 24 x 7

08 July 2020

Chromium now migrated to the new C++ Mojo types

At the end of the last year I wrote a long blog post summarizing the main work I was involved with as part of Igalia’s Chromium team. In it I mentioned that a big chunk of my time was spent working on the migration to the new C++ Mojo types across the entire codebase of Chromium, in the context of the Onion Soup 2.0 project.

For those of you who don’t know what Mojo is about, there is extensive information about it in Chromium’s documentation, but for the sake of this post, let’s simplify things and say that Mojo is a modern replacement to Chromium’s legacy IPC APIs which enables a better, simpler and more direct way of communication among all of Chromium’s different processes.

One interesting thing about this conversion is that, even though Mojo was already “the new thing” compared to Chromium’s legacy IPC APIs, the original Mojo API presented a few problems that could only be fixed with a newer API. This is the main reason that motivated this migration, since the new Mojo API fixed those issues by providing less confusing and less error-prone types, as well as additional checks that would force your code to be safer than before, and all this done in a binary compatible way. Please check out the Mojo Bindings Conversion Cheatsheet for more details on what exactly those conversions would be about.
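To give a feel for what one of these conversions looks like in practice, here is a minimal, hypothetical C++ sketch (example::mojom::Logger is a made-up interface; the type mapping of InterfacePtr/Binding/InterfaceRequest to Remote/Receiver/PendingReceiver follows the cheatsheet linked above):

// Old types, from mojo/public/cpp/bindings/interface_ptr.h and binding.h:
example::mojom::LoggerPtr logger;                    // alias for mojo::InterfacePtr<Logger>
mojo::Binding<example::mojom::Logger> binding(this); // |this| implements the interface
binding.Bind(mojo::MakeRequest(&logger));            // mojo::InterfaceRequest<Logger>
logger->Log("hello");

// Equivalent code with the new types, from mojo/public/cpp/bindings/remote.h and receiver.h:
mojo::Remote<example::mojom::Logger> logger;
mojo::Receiver<example::mojom::Logger> receiver(this);
receiver.Bind(logger.BindNewPipeAndPassReceiver());  // mojo::PendingReceiver<Logger>
logger->Log("hello");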

Another interesting aspect of this conversion is that, unfortunately, it wouldn’t be as easy as running a “search & replace” operation, since in most cases deeper changes were needed to make sure that the migration wouldn’t break either existing tests or production code. This is the reason why we often had to write bigger refactorings than one would have anticipated for some of those migrations, or why some patches took a bit longer to land as they would span too many directories, making the merging process extra challenging.

Now combine all this with the fact that we were confronted with about 5000 instances of the old types in the Chromium codebase when we started, spanning nearly every single subdirectory of the project, and you’ll probably understand why this was a massive feat that would take quite some time to tackle.

It turns out, though, that just 6 months after we started working on this, and with more than 1100 patches landed upstream, our team managed to have nearly all the existing uses of the old APIs migrated to the new ones, reaching a point where, by the end of December 2019, we had completed 99.21% of the entire migration! That is, we basically had almost everything migrated back then and the only part we were missing was the migration of //components/arc, as I already announced in this blog back in December and in the chromium-mojo mailing list.


Progress of migrations to the new Mojo syntax by December 2019

This was good news indeed. But the fact that we didn’t manage to reach 100% was still a bit of a pain point because, as Kentaro Hara mentioned in the chromium-mojo mailing list yesterday, “finishing 100% is very important because refactoring projects that started but didn’t finish leave a lot of tech debt in the code base”. And surely we didn’t want to leave the project unfinished, so we kept collaborating with the Chromium community in order to finish the job.

The main problem with //components/arc was that, as explained in the bug where we tracked that particular subtask, we couldn’t migrate it yet because the external libchrome repository was still relying on the old types! Thus, even though almost nothing else in Chromium was using them at that point, migrating those .mojom files under //components/arc to the new types would basically break libchrome, which wouldn’t have a recent enough version of Mojo to understand them (and no, according to the people collaborating with us on this effort at that particular moment, getting Mojo updated to a new version in libchrome was not really a possibility).

So, in order to fix this situation, we collaborated closely with the people maintaining the libchrome repository (external to Chromium’s repository and still relying on the old Mojo types) to get the remaining migration, inside //components/arc, unblocked. And after a few months of making small changes here and there to provide the libchrome folks with the tools they’d need to proceed with the migration, they could finally integrate the necessary changes that would ultimately allow us to complete the task.

Once this important piece of the puzzle was in place, all that was left was for my colleague Abhijeet to land the CL that would migrate most of //components/arc to the new types (a CL which had been put on hold for about 6 months!), and then to land a few more CLs on top to make sure we got rid of any trace of the old types that might still be in the codebase (special kudos to my colleague Gyuyoung, who wrote most of those final CLs).


Progress of migrations to the new Mojo syntax by July 2020

After all this effort, which would sit on top of all the amazing work that my team had already done in the second half of 2019, we finally reached the point where we are today, when we can proudly and loudly announce that the migration of the old C++ Mojo types to the new ones is finally complete! Please feel free to check out the details on the spreadsheet tracking this effort.

So please join me in celebrating this important milestone for the Chromium project and enjoy the new codebase free of the old Mojo types. It’s been difficult but it definitely pays off to see it completed, something which wouldn’t have been possible without all the people who contributed along the way with comments, patches, reviews and any other type of feedback. Thank you all! 👌 🍻

Last, while the main topic of this post is to celebrate the unblocking of these last migrations we had left since December 2019, I’d like to finish by acknowledging the work of all my colleagues from Igalia who worked along with me on this task since we started, one year ago. That is, Abhijeet, Antonio, Gyuyoung, Henrique, Julie and Shin.

Now if you’ll excuse me, we need to get back to working on the Onion Soup 2.0 project because we’re not done yet: at the moment we’re mostly focused on converting remote calls using Chromium’s legacy IPC to Mojo (see the status report by Dave Tapuska) and helping finish Onion Soup’ing the remaining directories under //content/renderer (see the status report by Kentaro Hara), so there’s no time to waste. But those migrations will be material for another post, of course.

16 May 2020

Now I am a member of The Document Foundation

The Document Foundation logo

Well, it took me some months to mention it here, but The Document Foundation has honored me by accepting my membership request:

Dear Ismael Olea,

We are pleased to inform you that, as of 2020-01-01, your membership has been officially filed. You are now acknowledged as member of The Document Foundation according to its bylaws.

Kind Regards,

The Document Foundation Membership Committee

Little things that make you a bit happier. Thank you!

14 May 2020

The Web Platform Tests project

Web Browsers and Test Driven Development

Working on Web browser development is not an easy feat, but if there’s something I’m personally very grateful for when it comes to collaborating with this kind of software project, it is their testing infrastructure and the peace of mind that it provides me with when making changes on a daily basis.

To help you understand the size of these projects, they involve millions of lines of code (Chromium is ~25 million lines of code, followed closely by Firefox and WebKit) and around 200-300 new patches landing every day. Try to imagine, for one second, how we could make changes if we didn’t have such testing infrastructure. It would basically be utter and complete chaos and, more importantly, it would mean extremely buggy Web browsers, broken implementations of the Web Platform and tens (hundreds?) of new bugs and crashes piling up every day… not a good thing at all for Web browsers, which are these days some of the most widely used applications (and not just ‘the thing you use to browse the Web’).

The Chromium Trybots in action

Now, there are many different types of tests that Web engines run automatically on a regular basis: unit tests for checking that APIs work as expected, platform-specific tests to make sure that your software runs correctly in different environments, performance tests to help browsers stay fast without increasing their memory footprint too much… and then, of course, there are the tests to make sure that the Web engines at the core of these projects implement the Web Platform correctly according to the numerous standards and specifications available.

And it’s here where I would like to bring your attention with this post because, when it comes to this last kind of tests (what we call “Web tests” or “layout tests”), each Web engine used to rely entirely on its own set of Web tests to make sure that it implemented the many different specifications correctly.

Clearly, there was some room for improvement here. It would be wonderful if we could have an engine-independent set of tests to check that a given implementation of the Web Platform works as expected, wouldn’t it? We could use it across different engines to make sure not only that they work as expected, but also that they behave in exactly the same way, and therefore give Web developers confidence that they can rely on the different specifications without having to implement engine-specific quirks.

Enter the Web Platform Tests project

The good news is that just such an ideal thing exists. It’s called the Web Platform Tests project. As it is concisely described on its official site:

“The web-platform-tests project is a cross-browser test suite for the Web-platform stack. Writing tests in a way that allows them to be run in all browsers gives browser projects confidence that they are shipping software which is compatible with other implementations, and that later implementations will be compatible with their implementations.”

I’d recommend visiting its website if you’re interested in the topic, watching the “Introduction to the web-platform-tests” video or even glancing at the git repository containing all the tests. There you can also find specific information such as how to run WPTs or how to write them. Also, you can have a look at the wpt.fyi dashboard to get a sense of what tests exist and how some of the main browsers are doing.
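For instance, once you have a local checkout of the wpt repository, running a subset of the suite against a specific browser is roughly a one-liner (the directory chosen here is just an example):

# From the root of a web-platform-tests checkout
./wpt run firefox css/css-grid/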

In short: I think it would be safe to say that this project is critical to the health of the whole Web Platform, and ultimately to Web developers. What’s very, very surprising is how long it took to get to where it is, since it came into being only about halfway into the history of the Web (there were earlier testing efforts at the W3C, but none that focused on automated & shared testing). But regardless of that, this is an interesting challenge: Filling in all of the missing unified tests, while new things are being added all the time!

Luckily, this was a challenge that did indeed take off, and all the major Web engines can now proudly say that they are regularly running about 36500 of these Web engine-independent tests (providing ~1.7 million sub-tests in total), with all the engines showing a pass rate between 91% and 98%. See the numbers below, as extracted from today’s WPT data:

Browser               Pass       Total      Pass rate
Chrome 84             1680105    1714711    97.98%
Edge 84               1669977    1714195    97.42%
Firefox 78            1640985    1698418    96.62%
Safari 105 preview    1543625    1695743    91.03%

And here at Igalia, we’ve recently had the opportunity to work on this for a little while and so I’d like to write a bit about that…

Upstreaming Chromium’s tests during the Coronavirus Outbreak

As you all know, we’re in the middle of an unprecedented world-wide crisis that is affecting everyone in one way or another. One particular consequence of it in the context of the Chromium project is that Chromium releases were paused for a while. On top of this, some constraints on what could be landed upstream were put in place to guarantee quality and stability of the Chromium platform during this strange period we’re going through these days.

These particular constraints impacted my team in that we couldn’t really keep working, in the context of the Chromium project, on the tasks we had been working on up to that point. Our involvement with the Blink Onion Soup 2.0 project usually requires landing relatively large refactors, and that kind of change was forbidden for the time being.

Fortunately, we found an opportunity to collaborate in the meantime with the Web Platform Tests project by analyzing and trying to upstream many of the existing Chromium-specific tests that hadn’t yet been unified. This is important because tests exist for widely used specifications, but if they aren’t in Web Platform Tests, their utility and benefits are limited to Chromium. If done well, all of the tests that we managed to upstream would be immediately available for everyone else too. Firefox and WebKit-based browsers would not only be able to identify missing features and bugs, but would also be provided with an extra set of tests to check that they were implementing these features correctly, and interoperably.

The WPT Dashboard

It was an interesting challenge considering that we had to switch very quickly from writing C++ code around the IPC layers of Chromium to analyzing, migrating and upstreaming Web tests from the huge pool of Chromium tests. We focused mainly on CSS Grid Layout, Flexbox, Masking and Filters related tests… but I think the results were quite good in the end:

As of today, I’m happy to report that, during the ~4 weeks we worked on this, my team migrated 240 Chromium-specific Web tests to the Web Platform Tests upstream repository, helping increase test coverage in other Web engines and thus improving interoperability among browsers:

  • CSS Flexbox: 89 tests migrated
  • CSS Filters: 44 tests migrated
  • CSS Masking: 13 tests migrated
  • CSS Grid Layout: 94 tests migrated

But there is more to this than just numbers. Ultimately, as I said before, these migrations should help identify missing features and bugs in other Web engines, and that was precisely the case here. You can easily see this by checking the list of automatically created bugs in Firefox’s bugzilla, as well as some of the bugs filed in WebKit’s bugzilla during the time we worked on this.

…and note that this doesn’t even include the additional 96 Chromium-specific tests that we analyzed but determined were not yet eligible for migration to WPT (normally because they relied on some internal Chromium API or non-standard behaviour), and which would require further work to get upstreamed. But that was a bit out of scope for those few weeks we could work on this, so we decided to focus on upstreaming the rest of the tests instead.

Personally, I think this was a big win for the Web Platform and I’m very proud and happy to have had an opportunity to have contributed to it during these dark times we’re living, as part of my job at Igalia. Now I’m back to working on the Blink Onion Soup 2.0 project, where I think I should write about too, but that’s a topic for a different blog post.

Credit where credit is due

I wouldn’t want to finish this blog post without acknowledging all the different contributors who tirelessly worked on this effort to help improve the Web Platform by providing the WPT project with so many more tests, so here it is:

From the Igalia side, my whole team was the one which took on this challenge, that is: Abhijeet, Antonio, Gyuyoung, Henrique, Julie, Shin and myself. Kudos everyone!

And from the reviewing side, many people chimed in but I’d like to thank in particular the following persons, who were deeply involved with the whole effort from beginning to end regardless of their affiliation: Christian Biesinger, David Grogan, Robert Ma, Stephen Chenney, Fredrik Söderquist, Manuel Rego Casasnovas and Javier Fernandez. Many thanks to all of you!

Take care and stay safe!

09 May 2020

Tracker 2.99.1 and miners released

TL;DR: $TITLE, and a call for distributors to make it easily available in stable distros, more about that at the bottom.

Sometime this week (or last, depending on how you count), Tracker 2.99.1 was released. Sam has been doing a fantastic series of blog posts documenting the progress. With my blogging frequency I’m far from stealing his thunder :), but I will still add some retrospective here to highlight how important a milestone this is.

First of all, let’s give an idea of the magnitude of the changes so far:


[carlos@irma tracker]$ git diff origin/tracker-2.3..origin/master --stat -- docs examples src tests utils |tail -n 1
788 files changed, 20475 insertions(+), 66384 deletions(-)

[carlos@irma tracker-miners]$ git diff origin/tracker-miners-2.3..origin/master --stat -- data docs src tests | tail -n 1
354 files changed, 39422 insertions(+), 6027 deletions(-)

What did happen there? A little more than half of the insertions in tracker-miners (and corresponding deletions in tracker) can be attributed to code from libtracker-miner, libtracker-control and corresponding tests moving to tracker-miners. Those libraries are no longer public, but given those are either unused or easily replaceable, that’s not even the most notable change :).

The changes globally could be described as “things falling in place”, Tracker got more cohesive, versatile and tested than it ever was, we put a lot of care and attention to detail, and we hope you like the result. Let’s break down the highlights.

Understanding SPARQL

Sometime a couple of years ago, I got fed up after several failed attempts at implementing support for property paths, which wound up in a rewrite of the SPARQL parser. This was part of Tracker 2.2.0 and brought its own benefits; ancient history.

Getting to the point, having the expression tree in the new parser closely modeled after the SPARQL 1.1 grammar definition helped get a perfect snapshot of what we don’t do, what we don’t do correctly and what we do extra. The parser was made to accept all correct SPARQL, and we had an `_unimplemented()` define in place to error out when interpreting the expression tree.

But that also gave me something to grep through and sigh at; this turned into many further reads of the SPARQL 1.1 specs, and a number of ideas about how to tackle those gaps, or about whether we were restricted by compatibility concerns, as for some things we were limited by our own database structure.

Fast forward to today, the define is gone. Tracker covers the SPARQL 1.1 language in its entirety, warts and everything. The spec is from 2013, we just got there 7 years late :). Most notably, there’s:

  • Graphs: In a triple store, the aptly named triples consist of subject/predicate/object, and they belong within graphs. The object may point to elements in other graphs.

    In prior versions, we “supported graphs” in the language, but those were more a property of the triple’s object. This changes the semantics slightly in appearance but in fundamental ways, eg. no two graphs may have the same triple, and the ownership of the triple is backwards if subject and object are in different graphs.

    Now the implementation of graphs perfectly matches the description, and a graph becomes a good isolated unit to grant access to in the case of sandboxing.

    We also support the whole-graph ADD/MOVE/CLEAR/LOAD/DROP operations, to ease their management.

  • Services: The SERVICE syntax allows federating portions of your query graph pattern to external services, and operating transparently on that data as if it were local. This is not exactly new in Tracker 2.99.x, but it now supports D-Bus services in addition to HTTP ones. More notes about why this is key further down.
  • New query forms, DESCRIBE/CONSTRUCT: This syntax sits alongside SELECT. DESCRIBE is a simple form to get the RDF triples fully describing a resource; CONSTRUCT is a more powerful data extraction clause that allows serializing arbitrary portions of the triple set, even all of it, and even across RDF schemas (see the short example right after this list).
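As a rough sketch of what these new query forms look like (the resource IRI is made up, and nfo:fileName / nfo:FileDataObject are just illustrative Nepomuk terms):

# Get all RDF triples fully describing one resource
DESCRIBE <urn:example:some-resource>

# Serialize a chosen portion of the triple set
CONSTRUCT { ?f nfo:fileName ?name }
WHERE { ?f a nfo:FileDataObject ; nfo:fileName ?name }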

Of all 11 documents from the SPARQL recommendations, we are essentially only missing support for HTTP endpoints to entirely pass for a SPARQL 1.1 store. We obviously don’t mean to compete with enterprise-level databases, but we are completionists and will get to implementing the full recommendations someday :).

There is no central store

The tracker-store service got stripped of everything that makes it special. You were already able to create private stores, making those public via DBus is now one API call away. And its simple DBus API to perform/restore backups is now superseded by CONSTRUCT and LOAD syntax.

We have essentially democratized triple stores, in this picture (and a sandboxed world) it does not make sense to have a singleton default one, so the tracker-store process itself is no more. Each miner (Filesystem, RSS) has its own store, made public on its DBus name. TrackerSparqlConnection constructors let you specifically create a local store, or connect to a specific DBus/HTTP service.

No central service? New paradigm!

Did you use to store/modify data in tracker-store? There’s some bad news: It’s no longer for you to do that, scram from our lawn!

You are still much welcome to create your own private store, there you can do as you please, even rolling something else than Nepomuk.

But wait, how can you keep your own store and still consume data indexed by the Tracker miners? Here the SERVICE syntax comes into play, allowing you to deal with miner data and your own altogether. A simple hypothetical example:

# Query favorite files
SELECT ?u {
  SERVICE <dbus:org.freedesktop.Tracker3.Miner.Files> {
    ?u a nfo:FileDataObject
  }
  ?u mylocaldata:isFavorite true
}

As per the grammar definition, the SERVICE syntax can only be used from Query forms, not Update ones. This is essentially the language conspiring to keep a clear ownership model, where other services are not yours to modify.

If you are only interested in accessing one service, you can use tracker_sparql_connection_bus_new and perform queries directly to the remote service.

A web presence

It’s all about appearance these days, that’s why newscasters don’t switch the half of the suit they wear. A long time ago, we used to have the tracker-project.org domain, the domain expired and eventually got squatted.

That normally sucks in itself; for us it was a bit of a pickle, as RDF (and our own ontologies) stands largely on URIs, which means live software producing links out of our control, and those links ending up in pastes/bugs/forums all over the internet. Luckily for us, tracker-project.org is a terrible choice of name for a porn site.

We couldn’t simply make the change either; in many regards those links were ABI. With 3.x on the way, ABI was no longer a problem, and Sam did things properly, so we now have a site and a proper repository of ontologies.

Nepomuk is dead, long live Nepomuk

Nepomuk is a dead project. Despite its site being currently alive, it’s been dead for extended periods of time over the last 2 years. That’s 11.5M EUR of your european taxpayer money slowly fading away.

We no longer think we should consider it “an upstream”, so we have decided to go our own way. After some minor sanitization and URI rewriting, the Nepomuk ontology is preserved mostly as-is, under our own control.

But remember, Nepomuk is just our “reference” ontology, a Swiss Army knife for whatever might need to be stored in a desktop. You can always roll your own.

Tracker-miner-fs data layout

For sandboxing to be of any use, there must be some actual data separation. The tracker-miner-fs service now stores things in several graphs:

  • tracker:FileSystem
  • tracker:Audio
  • tracker:Video
  • tracker:Documents
  • tracker:Software

It also commits further to the separation between “Data Objects” (e.g. files) and “Information Elements” (e.g. what their content represents). Both aspects of a “file” still reference each other, but they simply used to be the same resource previously.

The tracker:FileSystem graph is the backbone of file system data; it contains all file Data Objects and folders. All other graphs store the related Information Elements (e.g. a song in a FLAC file).

Resources are interconnected between graphs; depending on the graphs you have access to, you will get a partial (yet coherent) view of the data.
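Putting it together, a hypothetical query restricted to one of these graphs could look like this (nmm:MusicPiece and nie:title are Nepomuk terms, used here purely for illustration):

# Songs, as exposed through the Audio graph
SELECT ?song ?title {
  GRAPH tracker:Audio {
    ?song a nmm:MusicPiece ;
          nie:title ?title
  }
}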

CLI improvements

We have been making some changes around our CLI tools. With Tracker shifting its scope to being a good SPARQL triple store, the base set of CLI tools revolves around that, and can be seen as an equivalent to the sqlite3 CLI command.

We also have some SPARQL-specific sugar, like tracker endpoint, which lets you create transient SPARQL services.

All miner-specific subcommands, or those that relied implicitly on their details, moved to the tracker-miners repo; the tracker3 command is extensible to allow this.

Documentation

In case this was not clear, we want to be a general purpose data storage solution. We did spend quite some time improving and extending the developer and ontology documentation, adding migration notes… there’s even an incipient SPARQL tutorial!

There is a sneak preview of the API documentation at our site. It’s nice being able to tell that again!

Better tests

Tracker additionally ships a small helper Python library to make it easy to write tests against the Tracker infrastructure. There are many new and deeper tests all over the place, e.g. around the new syntax support.

Up next…

You’ve seen some talk about sandboxing, but nothing about sandboxing itself. That’s right, support for it is in a branch and will probably be part of 2.99.2. Now the path is paved for it to be transparent.

We are currently starting the race to update users. Sam has made some nice progress on Nautilus, and I just got started at shaving a yak on a cricket.

The porting is not completely straightforward. With a few nice exceptions, a good amount of the Tracker-related code around is stuck in some time-frozen, “as long as it works”, cargo-culted state. This sounds like a good opportunity to modernize queries and introduce the usage of compiled statements. We are optimistic that we’ll get most major players ported in time, and we made 3.x able to install and run in parallel in case we miss the goal.

A call to application developers

We are no longer just “that indexer thingy”. If you need to store data with more depth than a table, if you missed your database design and relational algebra classes, or don’t miss them at all, we’ve got to talk :). Come visit us at #tracker.

A call to distributors

We made tracker and tracker-miners 3.x able to install and run in parallel to tracker 2.x, and we expect users to get updated to it over time.

Given that it will get reflected in nightly flatpaks, and Tracker miners are host services, we recommend that tracker3 development releases are made available or easy to install in current stable distribution releases. Early testers and ourselves will thank you.

04 May 2020

Freeplane now published at Flathub

As I anticipated in April, the Freeplane mind mapping software has now been accepted and published at Flathub: https://flathub.org/apps/details/org.freeplane.App.

Flathub screenshot.

Freeplane is a mind mapping, knowledge management and project management tool. Develop, organize and communicate your ideas and knowledge in the most effective way.

Freeplane is a fork of FreeMind and it is in active development. It is now ready to install on any Linux system with just point-and-click through, for example, GNOME Software or any other Flatpak-compatible software installation manager.

GNOME Software screenshot.
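If you prefer the command line, installing and launching it from Flathub boils down to something like this (assuming the Flathub remote is already configured on your system):

flatpak install flathub org.freeplane.App
flatpak run org.freeplane.App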

Enjoy!

24 April 2020

What are all the localizations (l10n) of the Spanish language

A schematic review for the benefit of Recursos Lingüísticos Abiertos del Español (RLA-ES) and its spell checker.

Some time ago I became interested in how software localization systems really work, in particular those used in the Linux and free software world, and especially because of their relationship with the Spanish spell checker that the RLA-ES project has maintained for years, which is by far the most universally used one in our little world, on every kind of operating system. I started gathering data, but it was really nothing more than a tangled skein that I now intend to organize.

I hope to be as concise as I am clear.

How regional language codes are identified

Although standards such as ISO 639 and ISO 3166 have been in use, my conclusion is that the worldwide reference nowadays is the Unicode Common Locale Data Repository (CLDR), maintained by the Unicode Consortium and in widespread use.

The Unicode CLDR provides key building blocks for software to support the world’s languages, with the largest and most extensive standard repository of locale data available. This data is used by a wide spectrum of companies for their software internationalization and localization, adapting software to the conventions of different languages for such common software tasks.

CLDR, which uses a GitHub repository as its working tool, publishes revisions of the data; the latest one is CLDR 37, which is available for public download.

CLDR v37 publication data

Much of the credit for CLDR’s impact on today’s software is due to it being implemented in the ICU software libraries, available for C, C++ and Java, published under a free license and used all over the world by all kinds of software platforms. ICU is maintained at http://icu-project.org/. It is very likely that at this very moment you are using applications that implement CLDR because they use the ICU libraries.

The data specified is enormous and exhaustive in order to cover all the registered languages. In our case we will only look at the references to Spanish and its variants.

In turn, CLDR implements standards of which we will see some brief details:

IETF BCP 47 Language Tags

BCP 47 is the language code specification used by CLDR. It is composed of two concatenated RFCs:

and applies the IANA Language Subtag Registry.

It is apparently considered more exact and practical than the ISO 639 and ISO 3166 definitions.

Language Subtag Registry - IANA

It is a list that establishes a code (which they call a subtag) for each registered language.

The complete contents of the language subtag registry can be examined at the IANA Language Subtag Registry.

Let's pick out the entries related to Spanish:

%%
Type: language
Subtag: es
Description: Spanish
Description: Castilian
Added: 2005-10-16
Suppress-Script: Latn
%%
Type: language
Subtag: osp
Description: Old Spanish
Added: 2009-07-29
%%
%%
Type: language
Subtag: spq
Description: Loreto-Ucayali Spanish
Added: 2009-07-29
%%
Type: variant
Subtag: spanglis
Description: Spanglish
Added: 2017-02-23
Prefix: en
Prefix: es
Comments: A variety of contact dialects of English and Spanish
%%
Type: redundant
Tag: es-419
Description: Latin American Spanish
Added: 2005-07-15

In the RLA-ES spell checker we are using:

%%
Type: language
Subtag: es
Description: Spanish
Description: Castilian
Added: 2005-10-16
Suppress-Script: Latn

And I discover another subtag that could be of interest:

%%
Type: redundant
Tag: es-419
Description: Latin American Spanish
Added: 2005-07-15

Considerations:

  • There are more entries related to the various Hispanic sign languages, which are outside the scope of the spell checker.
  • There are also other entries for languages of Hispanic countries other than Spanish, which are likewise out of scope.

M49 Standard

What is the origin of the 419 code? It is defined in the Standard country or area codes for statistical use (M49) and assigned to the Latin America geographic region. Exactly:

Latin America and the Caribbean 419

I do not know what the practical need for this variant is, simply because I have not looked into it, but considering that it is defined as Type: redundant, it does not seem to be extremely high priority.

Unicode Common Locale Data Repository (CLDR)

For our case we will examine the contents of the cldr-common-37.0.zip package.

Our goal is to identify exactly all the recognized Spanish-language l10n codes for the different territories:

$ ls core/common/main/es*
common/main/es_419.xml	common/main/es_HN.xml
common/main/es_AR.xml	common/main/es_IC.xml
common/main/es_BO.xml	common/main/es_MX.xml
common/main/es_BR.xml	common/main/es_NI.xml
common/main/es_BZ.xml	common/main/es_PA.xml
common/main/es_CL.xml	common/main/es_PE.xml
common/main/es_CO.xml	common/main/es_PH.xml
common/main/es_CR.xml	common/main/es_PR.xml
common/main/es_CU.xml	common/main/es_PY.xml
common/main/es_DO.xml	common/main/es_SV.xml
common/main/es_EA.xml	common/main/es_US.xml
common/main/es_EC.xml	common/main/es_UY.xml
common/main/es_ES.xml	common/main/es_VE.xml
common/main/es_GQ.xml	common/main/es.xml
common/main/es_GT.xml

We are not going to examine their contents, but anyone interested can download and inspect them.

The exact list of codes can be obtained easily:

$ ls  common/main/es* | sed "s/.*\///g"|sed "s/.xml//g"|sort
es
es_419
es_AR
es_BO
es_BR
es_BZ
es_CL
es_CO
es_CR
es_CU
es_DO
es_EA
es_EC
es_ES
es_GQ
es_GT
es_HN
es_IC
es_MX
es_NI
es_PA
es_PE
es_PH
es_PR
es_PY
es_SV
es_US
es_UY
es_VE

To obtain the exact list of region labels, in Spanish, we can use:

$ for a in `ls  common/main/es* | sed "s/.*es_*//g"|sed "s/.xml//g"|sort` ; \
     do grep "<territory type=\"$a\">" common/main/es.xml ;done
subtag region
419 Latinoamérica
AR Argentina
BO Bolivia
BR Brasil
BZ Belice
CL Chile
CO Colombia
CR Costa Rica
CU Cuba
DO República Dominicana
EA Ceuta y Melilla
EC Ecuador
ES España
GQ Guinea Ecuatorial
GT Guatemala
HN Honduras
IC Canarias
MX México
NI Nicaragua
PA Panamá
PE Perú
PH Filipinas
PR Puerto Rico
PY Paraguay
SV El Salvador
US Estados Unidos
UY Uruguay
VE Venezuela

Completeness of the RLA-ES spell checker

What is, by comparison, the state of completeness of the RLA-ES spell checker:

subtag region hunspell-es
es_419 Latinoamérica
es_AR Argentina
es_BO Bolivia
es_BR Brasil
es_BZ Belice
es_CL Chile
es_CO Colombia
es_CR Costa Rica
es_CU Cuba
es_DO República Dominicana
es_EA Ceuta y Melilla
es_EC Ecuador
es_ES España
es_GQ Guinea Ecuatorial
es_GT Guatemala
es_HN Honduras
es_IC Canarias
es_MX México
es_NI Nicaragua
es_PA Panamá
es_PE Perú
es_PH Filipinas
es_PR Puerto Rico
es_PY Paraguay
es_SV El Salvador
es_US Estados Unidos
es_UY Uruguay
es_VE Venezuela
es generic ✔ coded as es-ANY

Given the number of regions covered, we can say that the state of the project is quite satisfactory. Some comments:

  • Regarding coverage of the regions that are part of Spain, we see no need to prepare dictionaries for «Ceuta y Melilla» and «Canarias», since in either case they should already be covered by es_ES.
  • In the case of «Brasil» and «Belice»… here we are unable to form an opinion. Spanish is not an official language in either country, and we have no idea how large the population that might need a localized version of the spell checker would be.
  • In «Guinea Ecuatorial», on the other hand, Spanish is indeed an official language. It would be the only nation-state with Spanish as an official language for which we do not have a localized spell checker.
  • The case of the «419» region could be worth discussing with the project members, to find out whether some particular change or action would be advisable or whether the current products are sufficient.

Next steps

Now that the standardized language tags are clear, it seems appropriate to verify the compatibility of at least the most popular applications. We shall see.

22 April 2020

Recent activity on Flatpak: video workshop and new bundles

The past few days have been very flatpaky for me, and productive enough to create some new material.

A Flatpak getting started video tutorial

Joining my mates at HackLab Almería, who took the initiative of running a set of video talks supporting the #YoMeQuedoEnCasa initiative fighting the boredom of COVID-19 confinement in Spain (thanks Víctor), I felt ready enough to give a talk about guerrilla Flatpak packaging, using my work on recent bundles as examples:

The talk is in Spanish. If interested, you can ask or comment at the HLA forum entry. The recording quality is not great: it’s the first time I recorded a talk with the amazing OBS application. It’s not great work but it does the job.

JClic published at Flathub

I’m very happy to announce that the JClic educational open source application is now published at Flathub! This is relevant because the Clic project has been in development for more than two decades, it is used by teachers all over the world and has more than 2500 educational activities published.

desktop screenshot mobile screenshot

Other Flatpak bundles in the works

As I’m getting experience with Flatpak and Java apps, I’m preparing other ones that I think deserve to be really widespread:

Hopefully all of these will be published at Flathub at some point.

Enjoy!

13 April 2020

OmegaT packaged with Flatpak

OmegaT logo

It took me a while between announcing that I was retiring OmegaT from Fedora and my first attempt at deploying it with Flatpak and, I hope, Flathub. There were some reasons:

  • I was completely new to Flatpak,
  • the big amount of work needed to write the manifest for a non-trivial application compiled completely from scratch,
  • the complexity, at that time, of working with Java and Flatpak.

Well, it was time to resume the task. I’m not really an OmegaT user (I’ve used it just a few times in all these years) but I feel committed to helping it become better known and used on the Linux desktop. In the past I did a lot of work with technical translations and I fully understand the power of the tool for professional users, and it’s open source: as far as I know OmegaT is the best open source CAT (Computer Assisted Translation) tool in the world.

So here I am again, and I did some work towards a final product. Today the work is a lot easier thanks to the existence of the OpenJDK Flatpak extension (see this update from Mat Booth). Thanks for this extension! It’s so nice that I’m finishing two other Java programs to be published at Flathub: Freeplane and JClic. For this release series I’m taking the easy way of packaging the binary portable bundle instead of compiling from sources because… it’s tedious. I am aware it’s not the best practice. And this time I’ve started from a more recent version, beta 5.2.0.

I invite you to give it a try and provide some feedback if any. Some details:

Remember that when you install a Flatpak package it is fully self-contained; you don’t need to install anything else (in this case the JRE is included in the bundle) and it’s isolated from the rest of the desktop, so you’ll be able to keep an install indefinitely without worrying that it will break on an operating system update.

Remember too that you can install this bundle on any modern Linux desktop. The only requirement is to have Flatpak installed, as most recent Linux versions do.

Next steps:

  • finishing a final manifest to propose to Flathub in a couple of days or so, which will be, as far as I am concerned, the main publishing site of the Flatpak bundle; once in Flathub, any user will be able to install it by pointing and clicking as in any other app store;
  • getting my minor contributions, basically metadata, accepted upstream;
  • extending the bundle with some of the most popular OmegaT extensions and dictionaries;
  • and eventually rewriting the manifest to compile from sources.

Hope you’ll find it useful.

PS: fixed how to install the package from the CLI and changed the download URL a couple of times :-/

03 April 2020

Border maps of Spain and its autonomous communities

A few days ago I explained a method to download border maps from OpenStreetMap. The goal was to create maps of the Spanish autonomous communities that a colleague from WMES needed for her university classes. So, once I knew how, I took the opportunity to prepare them.

So here is the map of the borders of Spain and its autonomous communities, available for your use and enjoyment.

The data is licensed according to OSM's copyright terms:

© OpenStreetMap contributors
The data is available under the Open Database License (ODbL).

From there on, you may use it with complete freedom.

How to use the map

The idea is that almost anyone with basic office-software skills should be able to use them. A quick explanation of how:

  • The map is made up of several layers: one for each autonomous community and autonomous city, and another one with the borders of Spain.
  • You will notice that the country’s borders include a maritime part: these are the legal limits of the territorial waters.
  • For your project you need to open the complete file, with all the layers. If you only need some of the layers you can disable or delete the rest.
  • The file is in KMZ format and should open with almost any software that works with maps. In this case it was put together with Google Earth Pro, cross-platform software available for free.
  • Can I use the map in Google Maps? In theory yes, because it is technically compatible, but the size is larger than allowed, so you cannot import the complete map. If you only need part of the content, though, just edit it with an application such as the aforementioned Google Earth, export only the layers you need to a file, and then import that file into a map in My Maps.

Obviously, if you work directly with OSM data you will not need this map.

Enjoy it in good health.

30 March 2020

How to create border maps for your projects

When working on data projects it is usual to need administrative maps. In my experience it is not trivial to find the cartographic files as open-access or open source data sources, so after some searching I found a method to create an ad-hoc map for any administrative region coded into OpenStreetMap. It’s not a trivial method, but it is not as complex as it seems at first sight. I’ll try to introduce the essential concepts so the recipe is easy to understand. If you know other methods as good as or better than this one, please give me some feedback.

I used this method with geodata for Spain so I guess it works with any other administrative region coded in OSM.

First you need to know an OpenStreetMap concept: the relation. In our case we’ll use multipolygon relations, which are used to code the borders of the areas we are interested in. The important thing to remember here is that you are going to use an OSM relation.

Second, you’ll want to select the region of your interest and you’ll need to figure out how it has been mapped in OSM, so you need to find the related OSM relation. As an example I’ll use Alamedilla, my parents’ town in the province of Granada, Spain.

the method

Go to https://www.openstreetmap.org and search for the region of your interest. For example Alamedilla:

example screenshot

Click to the correct place and you’ll see something like this:

example screenshot

Look at the URL box in the browser and you’ll see something like this: https://www.openstreetmap.org/relation/343442. The code number you need for the next steps is the one in the URL after the relation keyword. In this example it is 343442.

Then visit the Overpass Turbo service, a powerful web-based data query and filtering tool for OpenStreetMap:

example screenshot

The white box on the left is where you write the code of your query for Overpass. There is a wizard tool in the menu, but it’s not trivial either. Instead you can copy exactly this code:


[out:json][timeout:2500];
(
    relation(343442)({{bbox}});
);
out body;
>;
out skel qt;

example screenshot

In your case you need to replace the 343442 number (used for the Alamedilla example) with the relation number you got before. If you modify the query, keep in mind that the default timeout (25) may not be enough for your case.

Now, clicking the Run button will execute your query. Keep in mind that the resulting data set could be really big, depending on how big the area is.

So, here it is:

example screenshot

Zoom the map to have a better view:

example screenshot

Now you’ll find the resulting data set, in GeoJSON format, in the Data tab (right side). If this format is fine for you, you are done. But if you need some other format you are in luck, because when clicking the Export button you’ll find some other formats to export to: GPX, KML and OSM data.

In this example we’ll use the KML format used by Google Earth, Maps and many others.

example screenshot

importing into Google Earth

Open Google Earth:

example screenshot

and open our kml file: [File][Open]:

example screenshot

and here it is:

example screenshot

Note: I modified the color (at the object properties) to make it more visible in the screenshot.

So, it is done. Now you can use the KML file in your application, import it into any GIS software or convert it to another format if required.

importing into Google Maps

Go to Google MyMaps and create a new map. Import a new layer and select your KML file:

example screenshot

Here it is:

example screenshot

conclusion

Now you are able to create maps of any region added to OpenStreetMap, export them to any of the aforementioned formats and import them into your applications. Hope this helps.

If you finally use data from the OSM project remember to add the correct credits:

We require that you use the credit “© OpenStreetMap contributors”.

See credit details at osm.org/copyright.

This is an example of how AWESOME OpenStreetMap is, and of the extraordinary work these people do. Big thanks to all of the contributors for this impressive service.

08 February 2020

Xamarin forks and whatnots

Busy days in geewallet world! I just released version 0.4.2.198 which brings some interesting fixes, but I wanted to talk about the internal work that had to be done for each of them, in case you're interested.
  • In Linux(GTK), cold storage mode when pairing was broken, because the absence of internet connection was not being detected properly. The bug was in a 3rd-party nuget library we were using: Xam.Plugin.Connectivity. But we couldn't migrate to Xamarin.Essentials for this feature because Xamarin.Essentials lacks support for some platforms that we already supported (not officially, but we know geewallet worked on them even if we haven't released binaries/packages for all of them yet). The solution? We forked Xamarin.Essentials to include support for these platforms (macOS and Linux), fixed the bug in our fork, and published our fork in nuget under the name `DotNetEssentials`. Whenever Xamarin.Essentials starts supporting these platforms, we will stop using our fork.
  • The clipboard functionality in geewallet depended on another 3rd-party nuget library: Xamarin.Plugins.Clipboard. The GTK bits of this were actually contributed by me to their github repository as a Pull Request some time ago, so we just packaged the same code to include it in our new DotNetEssentials fork. One dependency less to care about!
  • Xamarin.Forms had a strange bug that caused some buttons sometimes to not be re-enabled. This bug has been fixed by one of our developers and its fix was included in the new pre-release of Xamarin.Forms 4.5, so we have upgraded geewallet to use this new version instead of v4.3.
Last but not least, I wanted to mention something not strictly related to this new release. We got accepted in GNOME's gitlab, so now the canonical place to find our repository is here:


Technically speaking, this will allow us to drop GitHub Actions for our non-Linux CI lanes (we were already happy with our gitlab-ci recipes before we had to use GH! The only limitation was that GitLab.com's free tier CI minutes for agents were Linux-only).

But in general we're just happy to be hosted by infrastructure from one of the most important opensource projects in the world. Special thanks go to Carlos Soriano and Andrea Veri for helping us with the migration.

PS: Apologies if the previous blogpost to this one shows up in planets again, as it might be a side-effect of updating its links to point to the new git repo!

Introducing geewallet

Version 0.4.2.187 of geewallet has just been published to the snap store! You can install it by looking for its name in the store or by installing it from the command line with `snap install geewallet`. It features a very simplistic and minimalistic UI/UX. Nothing very fancy, especially because it has a single codebase that targets many (potential) platforms, e.g. you can also find it in the Android App Store.

What was my motivation to create geewallet in the first place, around 2 years ago? Well, I was very excited about the “global computing platform” that Ethereum was promising. At the time, I thought it would be the best replacement for Namecoin: a decentralised naming system, but not just focusing on that aspect, rather bringing Turing-completeness so that you can build whatever you want on top of it, not just a key-value store. So then, I got ahold of some ethers to play with the platform. But by then, I didn’t find any wallet that I liked, especially when considering security. Most people were copy+pasting their private keys into a website (!) called MyEtherWallet. Not only was this idea terrifying (since you had to trust not just the security skills of the sysadmin who was in charge of the domain&server, but also that the developers of the software didn’t turn rogue…), it was even worse than that: it was worse than using a normal hot wallet. And what I wanted was actually a cold wallet, a wallet that could run on an offline device, to make sure hacking it would be impossible (not faraday-cage-impossible, but reasonably impossible).

So there I did it, I created my own wallet.

After some weeks, I added bitcoin support to it thanks to the NBitcoin library (good work Nicholas!). After some months, I added a cross-platform UI besides the first archaic command-line frontend. These days it looks like this:



What was my motivation to make geewallet a brain wallet? Well, at the time (and maybe nowadays too, at least before I unveil this project), the only decent brain wallet out there that seemed sufficiently secure (against brute force attacks) was WarpWallet, from the Keybase company. If you don’t believe in their approach, they have even placed a bounty on a decently small passphrase (so if you ever think that this kind of wallet could be hacked, you can be fairly sure that any cracker would target that bounty first, before thinking of you). The worst of it, again, was that to be able to use it you had to use a web interface, so you had the double-trust problem again. Now geewallet brings the same WarpWallet seed generation algorithm (backed by unit tests of course) but with a desktop/mobile approach, so that you can own the hardware where the seed is generated. No need to write long seeds of random words on pieces of paper anymore: your mind is the limit! (And of course geewallet will warn the user in case the passphrase is too short and simple: it even detects if all the words belong to the dictionary, to deter low entropy, from the human perspective.)

Why did I add support for Litecoin and Ethereum Classic to the wallet? First, let me tell you that bitcoin and ethereum, as technological innovations and network effects, are very difficult to beat. And in fact, I’m not a fan of the proliferation of dubious portrayed awesome new coins/tokens that claim to be as efficient and scalable as these first two. They would need not only to beat the network effect when it comes to users, but also developers (all the best cryptographers are working in Bitcoin and Ethereum technologies). However, Litecoin and Ethereum-Classic are so similar to Bitcoin and Ethereum, respectively, that adding support for them was less than a day’s work. And they are not completely irrelevant: Litecoin may bring zero-knowledge proofs in an upcoming update soon (plus, its fees are lower today, so it’s an alternative cheaper testnet with real value); and Ethereum-Classic has some inherent characteristics that may make it more decentralised than Ethereum in the long run (governance not following any cult of personality, plus it will remain as a Turing-complete platform on top of Proof Of Work, instead of switching to Proof of Stake; to understand why this is important, I recommend you to watch this video).

Another good reason of why I started something like this from scratch is because I wanted to use F# in a real open source project. I had been playing with it for a personal (private) project 2 years before starting this one, so I wanted to show the world that you can build a decent desktop app with simple and not too opinionated/academic functional programming. It reuses all the power of the .NET platform: you get debuggers, you can target mobile devices, you get immutability by default; all three in one, in this decade, at last. (BTW, everything is written in F#, even the build scripts.)

What’s the roadmap of geewallet? The most important topics I want to cover shortly are three:
  • Make it even more user friendly: blockchain addresses are akin to the numeric IP addresses of the early 80s when DNS still didn’t exist. We plan to use either ENS or IPNS or BNS or OpenCAP so that people can identify recipients much more easily.
  • Implement Layer2 technologies: we’re already past the proof of concept phase. We have branches that can open channels. The promise of these technologies is instantaneous transactions (no waits!) and ridiculous (if not free) fees.
  • Switch the GTK Xamarin.Forms driver to work with the new “GtkSharp” binding under the hood, which doesn’t require glue libraries. (I’ve had quite a few nightmares with native dependencies/libs when building the sandboxed snap package!)
With less priority:
  • Integrate with some Rust projects: MimbleWimble(Grin) lib, the distributed COMIT project for trustless atomic swaps, or other Layer2-related ones such as rust-lightning.
  • Cryptography work: threshold keys or deniable encryption (think "duress" passwords).
  • NFC support (find recipients without QR codes!).
  • Tizen support (watches!).
  • Acceptance testing via UI Selenium tests (look up the Uno Platform).

Areas where I would love contributions from the community:
  • Flatpak support: unfortunately I haven’t had time to look at this sandboxing technology, but it shouldn’t be too hard to do, especially considering that there’s already a Mono-based project that supports it: SparkleShare.
  • Ubuntu packaging: there’s a patch blocked on some Ubuntu bug that makes the wallet (or any .NET app these days, as it affects the .NET package manager: nuget) not build in Ubuntu 19.10. If this patch is not merged soon, the next LTS of Ubuntu will have this bug :( As far as I understand, what needs to be solved is this issue so that the latest hotfixes are bundled. (BTW I have to thank Timotheus Pokorra, the person in charge to package Mono in Fedora, for his help on this matter so far.)
  • GNOME community: I’m in search of a home for this project. I don’t like that it lives under my GitLab username, because it’s not easy to find. One of the reasons I’ve used GitLab is because I love the fact that, it being open source, many communities are adopting this infrastructure, like Debian and GNOME. That’s why I’ve used it as a bug tracker, for merge requests and to run CI jobs. This means that it should be easy to migrate to GNOME’s GitLab, shouldn’t it? There are unmaintained projects (e.g. banshee, which I couldn’t continue maintaining due to changes in life priorities...) already hosted there, so maybe it’s not much to ask if I could host a maintained one? It's probably the first Gtk-based wallet out there.

And just in case I wasn't clear:
  • Please don’t ask me to add support for your favourite %coin% or <token>.

  • If you want to contribute, don’t ask me what to work on, just think of your personal itch you want to scratch and discuss it with me filing a GitLab issue. If you’re a C# developer, I wrote a quick F# tutorial for you.
  • Thanks for reading up until here! It’s my pleasure to write about this project.


  • I'm excited about the world of private-key management. I think we can do much better than what we have today: most people think of hardware wallets as unhackable or as cold storage, but most of them are used via USB or Bluetooth! Which means they are not actually cold storage, so software wallets with offline support (also called air-gapped) are more secure! I think that eventually these tools will even merge with other ubiquitous tools with which we’re more familiar today: password managers!

    You can follow the project on Twitter (yes, I promise I will start using this platform to publish updates).

    PS: If you're still not convinced about these technologies, or if you didn't understand the PoW video I posted earlier, I recommend going back to basics by watching this other video, produced by a maths educator, which explains it really well.

    WORA-WNLF


    I started my career writing web applications. I struggled with PHP web frameworks, JavaScript libraries, and rendering differences (CSS and non-CSS glitches) across browsers. After leaving that world, I started focusing more on the backend side of things, fleeing the frontend camp (mainly, to be honest, just scared of that abomination that was JavaScript), although in my spare time I still did things with frontends: I hacked on a GTK media player called Banshee and a GTK chat app called Smuxi.

    So there you had me: a backend dev by day, desktop dev by night. But in the GTK world I had struggles similar to the ones I had had as a frontend dev when the browsers wouldn’t behave in the same way. I’m talking about GTK bugs on other, non-Linux OSs, i.e. Mac and Windows.

    See, I wanted to bring a desktop app to the masses, but these problems (and others of different kinds) prevented me from doing it. And while all this was happening, another major shift was happening as well: desktop environments were fading while mobile (and not so mobile: tablets!) platforms were rising in usage. This meant yet more platforms that I wished GTK supported. As I’m not a C language expert (nor did I want to be), I kept googling for the terms “gtk” and “android”, or “gtk” and “iOS”, to see if some hacker had put something together that I could use. But that day never came.

    Plus, I started noticing a trend: big companies with important mobile apps stopped using HTML5 within their apps in favour of native UIs, mainly chasing the “native look & feel”. This meant, clearly, that even if someone cooked up a hack that made GTK+ run on Android, it would still feel foreign, and nobody would dare to use it.

    So I started to become a fan of abstraction layers that act as a common denominator over different native toolkits while keeping their native look & feel. For example, XWT, the widget toolkit that Mono uses in MonoDevelop to target all three toolkits depending on the platform: Cocoa (on macOS), GTK (on Linux) and WPF (on Windows). Pretty cool hack if you ask me. But using this would contradict my desire to use a toolkit that already supported Android!

    And then there was Xamarin.Forms, an abstraction layer over iOS, Android and Windows Phone, but one that didn’t support desktops. Plus, at the time, Xamarin was proprietary (and I didn’t want to leave my open source world). It was a big dilemma.

    But then, some years passed, and many events happened around Xamarin.Forms:
    • Xamarin (the company) was bought by Microsoft and, at the same time, Xamarin (the product) was open sourced.
    • Xamarin.Forms is open source now (TBH I’m not sure whether it was proprietary before, or whether it was always open source).
    • Xamarin.Forms started supporting macOS and Windows UWP.
    • Xamarin.Forms 3.0 included support for GTK and WPF.

    So that was the last straw that made me switch all my desktop efforts toward Xamarin.Forms. Not only can I still target Linux+GTK (my favourite platform), I can also make my apps run on mobile platforms and on the desktop OSs that most people use. So both my niche and the mainstream are covered! But this is not the end: Xamarin.Forms has recently been ported to Tizen too! (A Linux-based OS used by Samsung in smart TVs and watches.)

    Now let me ask you something. Do you know of any graphical toolkit that allows you to target six different platforms with the same codebase? I repeat: Linux (GTK), Windows (UWP/WPF), macOS, iOS, Android, Tizen. The old Java saying finally applies, but for the frontend side: “write once, run anywhere” (WORA), to which I add “with native look’n’feel” (WORA-WNLF).

    If you want to know who the hero behind the GTK driver of Xamarin.Forms is, follow @jsuarezruiz, who BTW has recently been hired by Microsoft to work on their non-Windows IDE ;-)

    PS: If you like .NET and GTK, my employer is also hiring! (remote positions might be available too) ping me 

    23 de December de 2019

    End of the year Update: 2019 edition

    It’s the end of December and it seems that yet another year has gone by, so I figured that I’d write an EOY update to summarize my main work at Igalia as part of our Chromium team, as my humble attempt to make up for the lack of posts in this blog during this year.

    I did quite a few things this year, but for the purpose of this blog post I’ll focus on what I consider the most relevant ones: the work on the Servicification and Blink Onion Soup projects, the migration to the new Mojo APIs and the BrowserInterfaceBroker, as well as a summary of the conferences I attended, both as a regular attendee and as a speaker.

    But enough of an introduction, let’s dive now into the gory details…

    Servicification: migration to the Identity service

    As explained in my previous post from January, I’ve started this year working on the Chromium Servicification (s13n) Project. More specifically, I joined my team mates in helping with the migration to the Identity service by updating consumers of several classes from the sign-in component to ensure they now use the new IdentityManager API instead of directly accessing those other lower level APIs.

    This was important because at some point the Identity Service will run in a separate process, and a precondition for that to happen is that all access to sign-in related functionality goes through the IdentityManager, so that other processes can communicate with it directly via Mojo interfaces exposed by the Identity service.

    I’ve already talked long enough in my previous post, so please take a look in there if you want to know more details on what that work was exactly about.

    The Blink Onion Soup project

    Interestingly enough, a bit after finishing up working on the Identity service, our team dived deep into helping with another Chromium project that shared at least one of the goals of the s13n project: to improve the health of Chromium’s massive codebase. The project is code-named Blink Onion Soup and its main goal is, as described in the original design document from 2015, to “simplify the codebase, empower developers to implement features that run faster, and remove hurdles for developers interfacing with the rest of the Chromium”. There’s also a nice slide deck from 2016’s BlinkOn 6 that explains the idea in a more visual way, if you’re interested.


    “Layers”, by Robert Couse-Baker (CC BY 2.0)

    In a nutshell, the main idea is to simplify the codebase by removing or reducing the several layers of indirection located between Chromium and Blink that were necessary back in the day, before Blink was forked out of WebKit, to support different embedders with their particular needs (e.g. Epiphany, Chromium, Safari…). Those layers made sense back then, but these days Blink’s only embedder is Chromium’s content module, which is the module that Chrome and other Chromium-based browsers embed to leverage Chromium’s implementation of the Web Platform, and also where the multi-process and sandboxing architecture is implemented.

    And in order to implement the multi-process model, the content module is split into two main parts running in separate processes, which communicate with each other over IPC mechanisms: //content/browser, which represents the “browser process” that you embed in your application via the Content API, and //content/renderer, which represents the “renderer process” that internally runs the web engine’s logic, that is, Blink.

    With this in mind, the initial version of the Blink Onion Soup project (aka “Onion Soup 1.0”) was born about 4 years ago, and the folks spearheading this proposal started working on a three-step plan to implement their vision, which can be summarized as follows:

    1. Migrate usage of Chromium’s legacy IPC to the new IPC mechanism called Mojo.
    2. Move as much functionality as possible from //content/renderer down into Blink itself.
    3. Slim down Blink’s public APIs by removing classes/enums unused outside of Blink.

    Three clear steps, but definitely not easy ones as you can imagine. First of all, if we were to remove levels of indirection between //content/renderer and Blink as well as to slim down Blink’s public APIs as much as possible, a precondition for that would be to allow direct communication between the browser process and Blink itself, right?

    In other words, if you need your browser process to communicate with Blink for some specific purpose (e.g. reacting in a visual way to a Push Notification), it would certainly be sub-optimal to have something like this:

    …and yet that is what would happen if we kept using Chromium’s legacy IPC which, unlike Mojo, doesn’t allow us to communicate with Blink directly from //content/browser, meaning that we’d need to go first through //content/renderer and then navigate through different layers to move between there and Blink itself.

    In contrast, using Mojo would allow us to have Blink implement those remote services internally and then publicly declare the relevant Mojo interfaces so that other processes can interact with them without going through extra layers. Thus, doing that kind of migration would ultimately allow us to end up with something like this:

    …which looks nicer indeed, since now it is possible to communicate directly with Blink, where the remote service would be implemented (either in its core or in a module). Besides, it would no longer be necessary to consume Blink’s public API from //content/renderer, nor the other way around, enabling us to remove some code.
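
    To make that more tangible, here is a minimal, hypothetical sketch of browser-process code talking to a service implemented inside Blink. The interface name is made up and the exact accessors vary across Chromium versions, so take it as an illustration of the idea rather than actual Chromium code:

    // Hypothetical: blink::mojom::ExampleFeature would be declared in a .mojom
    // file that is part of Blink's public interfaces and implemented inside
    // Blink itself (in its core or in a module).
    mojo::Remote<blink::mojom::ExampleFeature> feature;
    render_frame_host->GetRemoteInterfaces()->GetInterface(
        feature.BindNewPipeAndPassReceiver());
    // The browser process can now call into Blink directly, without any glue
    // code living in //content/renderer.
    feature->HandlePushEvent();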

    However, we can’t simply ignore the stuff that lives in //content/renderer implementing part of the original logic, so before we can get to the lovely simplification shown above we would likely need to move some logic from //content/renderer right into Blink, which is what the second bullet point of the list above is about. Unfortunately, this is not always possible but, whenever it is an option, the job here is to figure out which parts of that logic in //content/renderer are really needed, and then figure out how to move them into Blink, likely removing some code along the way.

    This particular step is what we commonly call “Onion Soup’ing” //content/renderer/<feature> (not entirely sure “Onion Soup” is a verb in English, though…), and this is for instance how things looked before (left) and after (right) Onion Soup’ing a feature I worked on myself: Chromium’s implementation of the Push API:


    Onion Soup’ing //content/renderer/push_messaging

    Note how the whole design got quite a bit simpler moving from the left to the right side? Well, that’s because some abstract classes declared in Blink’s public API and implemented in //content/renderer (e.g. WebPushProvider, WebPushMessagingClient) are no longer needed now that those implementations have moved into Blink (i.e. PushProvider and PushMessagingClient), meaning that we can now finally remove them.

    Of course, there were also cases where we found public APIs in Blink that were not used anywhere, as well as cases where they were only being used inside Blink itself, perhaps because nobody noticed when that happened at some point in the past due to some other refactoring. In those cases the task was easier, as we would just remove them from the public API, if completely unused, or move them into Blink if still needed there, so that they are no longer exposed to a content module that no longer cares about them.

    Now, trying to provide a high-level overview of what our team “Onion Soup’ed” this year, I think I can say with confidence that we migrated (or helped migrate) more than 10 different modules like the one I mentioned above, such as android/, appcache/, media/stream/, media/webrtc, push_messaging/ and webdatabase/, among others. You can see the full list with all the modules migrated during the lifetime of this project in the spreadsheet tracking the Onion Soup efforts.

    In my particular case, I “Onion Soup’ed” the Push Messaging, WebDatabase and SurroundingText features, which was a fairly complete exercise as it involved working on all three bullet points: migrating to Mojo, moving logic from //content/renderer to Blink, and removing unused classes from Blink’s public API.

    And as for slimming down Blink’s public API, I can tell that we helped get to a point where more than 125 classes/enums were removed from Blink’s public APIs, simplifying and reducing the Chromium codebase along the way, as you can check in this other spreadsheet that tracked that particular piece of work.

    But we’re not done yet! While overall progress for the Onion Soup 1.0 project is around 90% right now, there are still a few more modules that require “Onion Soup’ing”, among which we’ll be tackling media/ (already WIP) and accessibility/ (starting in 2020), so there’s quite some more work to be done in that regard.

    Also, there is a newer design document for the so-called Onion Soup 2.0 project that contains some tasks that we have already been working on for a while, such as “Finish Onion Soup 1.0”, “Slim down Blink public APIs”, “Switch Mojo to new syntax” and “Convert legacy IPC in //content to Mojo”, so we are definitely not done yet. Good news here, though: some of those tasks are quite advanced already and, in the particular case of the migration to the new Mojo syntax, it’s nearly done by now, which is precisely what I’m talking about next…

    Migration to the new Mojo APIs and the BrowserInterfaceBroker

    Along with “Onion Soup’ing” some features, a big chunk of my time this year also went into this other task from the Onion Soup 2.0 project, where I was lucky enough again not to be alone, but accompanied by several of my teammates from Igalia’s Chromium team.

    This was a massive task where we worked hard to migrate all of Chromium’s codebase to the new Mojo APIs that were introduced a few months back, with the idea of getting Blink updated first and then having everything else migrated by the end of the year.


    Progress of migrations to the new Mojo syntax: June 1st – Dec 23rd, 2019

    But first things first: you might be wondering what was wrong with the “old” Mojo APIs since, after all, Mojo is the new thing we were migrating to from Chromium’s legacy API, right?

    Well, as it turns out, the previous APIs had a few problems that were causing some confusion, due to not providing the most intuitive type names (e.g. what is an InterfacePtrInfo anyway?), as well as being quite error-prone, since the old types were not as strict as the new ones in enforcing certain conditions that should not happen (e.g. trying to bind an already-bound endpoint shouldn’t be allowed). In the Mojo Bindings Conversion Cheatsheet you can find an exhaustive list of cases that needed to be considered, in case you want to know more details about these types of migrations.
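
    To make the difference more concrete, here is a minimal before/after sketch of what a typical conversion looked like; the example::mojom::Frame interface is made up for illustration, but the Mojo types themselves are the real old and new ones:

    // Old types: an InterfacePtr plus an InterfaceRequest created with
    // mojo::MakeRequest(); roles and bound/unbound state are easy to get wrong.
    example::mojom::FramePtr frame_ptr;
    example::mojom::FrameRequest request = mojo::MakeRequest(&frame_ptr);
    frame_ptr->Navigate(url);

    // New types (mojo/public/cpp/bindings/remote.h and pending_receiver.h):
    // clearer names and stricter runtime checks, e.g. binding an already-bound
    // endpoint is not allowed.
    mojo::Remote<example::mojom::Frame> frame;
    mojo::PendingReceiver<example::mojom::Frame> receiver =
        frame.BindNewPipeAndPassReceiver();
    frame->Navigate(url);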

    Now, as a consequence of this additional complexity, the task wouldn’t be as simple as a “search & replace” operation because, while moving from old to new code, it would often be necessary to fix situations where the old code was working fine just because it was relying on some constraints not being checked. And if you top that off with the fact that there were, literally, thousands of lines in the Chromium codebase using the old types, then you’ll see why this was a massive task to take on.

    Fortunately, after a few months of hard work done by our Chromium team, we can proudly say that we have nearly finished this task, which involved more than 1100 patches landed upstream after combining the patches that migrated the types inside Blink (see bug 978694) with those that tackled the rest of the Chromium repository (see bug 955171).

    And by “nearly finished” I mean an overall progress of 99.21% according to the Migration to new mojo types spreadsheet where we track this effort, where Blink and //content have been fully migrated, and all the other directories, aggregated together, are at 98.64%, not bad!

    In this regard, I’ve also been sending a bi-weekly status report mail to the chromium-mojo and platform-architecture-dev mailing lists for a while (see the latest report here), so make sure to subscribe there if you’re interested, even though those reports might not last much longer!

    Now, back with our feet on the ground, the main roadblock at the moment preventing us from reaching 100% is //components/arc, whose migration needs to be agreed with the folks maintaining a copy of Chromium’s ARC mojo files for Android and ChromeOS. This is currently under discussion (see the chromium-mojo ML and bug 1035484), so I’m confident it’s something we’ll be able to achieve early next year.

    Finally, and still related to these Mojo migrations, my colleague Shin and I took a “little detour” while working on this migration and focused for a while on the more specific task of migrating uses of Chromium’s InterfaceProvider to the new BrowserInterfaceBroker class. And while this was not as massive a task as the other migration, it was also very important because, besides fixing some problems inherent to the old InterfaceProvider API, it also blocked the migration to the new Mojo types, as InterfaceProvider usually relied on the old types!


    Architecture of the BrowserInterfaceBroker

    Good news here as well, though: after having the two of us working on this task for a few weeks, we can proudly say that, today, we have finished all the 132 migrations that were needed and are now in the process of doing some after-the-job cleanup operations that will remove even more code from the repository! \o/
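
    For illustration, this is roughly what those call-site changes looked like on the renderer/Blink side; the interface name is made up and the exact accessors depend on the call site and Chromium version, so take it as a hedged sketch rather than real Chromium code:

    mojo::Remote<mojom::blink::ExampleService> service;

    // Before: binding through the generic, per-frame InterfaceProvider.
    frame->GetInterfaceProvider().GetInterface(
        service.BindNewPipeAndPassReceiver());

    // After: binding through the frame's BrowserInterfaceBroker, which also
    // speaks the new Mojo types natively.
    frame->GetBrowserInterfaceBroker().GetInterface(
        service.BindNewPipeAndPassReceiver());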

    Attendance to conferences

    This year was particularly busy for me in terms of conferences, as I did travel to a few events both as an attendee and a speaker. So, here’s a summary about that as well:

    As usual, I started the year attending one of my favourite conferences of the year by going to FOSDEM 2019 in Brussels. And even though I didn’t have any talk to present in there, I did enjoy my visit like every year I go there. Being able to meet so many people and being able to attend such an impressive amount of interesting talks over the weekend while having some beers and chocolate is always great!

    Next stop was Toronto, Canada, where I attended BlinkOn 10 on April 9th & 10th. I was honoured to have a chance to present a summary of the contributions that Igalia made to the Chromium Open Source project in the 12 months before the event, which was a rewarding experience but also quite an intense one, because it was a lightning talk and I had to go through all the ~10 slides in a bit under 3 minutes! Slides are here and there is also a video of the talk, in case you want to check how crazy that was.

    I took a bit of a rest from conferences over the summer and then attended, also as usual, the Web Engines Hackfest that we at Igalia have been organising every single year since 2009. I didn’t have a presentation this time, but it was still a blast to attend once again as an Igalian and to celebrate the hackfest’s 10th anniversary, sharing knowledge and experiences with the people who attended this year’s edition.

    Finally, I attended two conferences in the Bay Area by mid November: the first one was the Chrome Dev Summit 2019 in San Francisco on Nov 11-12, and the second one was BlinkOn 11 in Sunnyvale on Nov 14-15. It was my first time at the Chrome Dev Summit and I have to say I was fairly impressed by the event, how it was organised and the quality of the talks. It was also great for me, as a browser developer, to see first-hand what web developers are more and less excited about, what’s coming next… and to get to meet people I would never have had a chance to meet at other events.

    As for BlinkOn 11, I presented a 30-minute talk about our work on the Onion Soup project, the Mojo migrations and improving Chromium’s code health in general, along with my colleague Antonio Gomes. It was basically an “extended” version of this post, where we went not only through the tasks I was personally involved with, but also talked about other tasks that other members of our team worked on during this year, which include many other things! Feel free to check out the slides here, as well as the video of the talk.

    Wrapping Up

    As you might have guessed, 2019 has been a pretty exciting and busy year for me work-wise, but the most interesting bit in my opinion is that what I mentioned here was just the tip of the iceberg… many other things happened on the personal side, starting with the fact that this was the year we consolidated our return to Spain after 6 years living abroad, for instance.

    Also, and getting back to work-related stuff again, this year I was accepted back into Igalia’s Assembly, after having re-joined this amazing company in September 2018 following a 6-year “gap” living and working in the UK. Besides being something I was very excited and happy about, it also brought some more responsibilities onto my plate, as is natural.

    Last, I can’t finish this post without being explicitly grateful to all the people I got to interact with during this year, both at work and outside, who made my life easier and nicer at so many different levels. To all of you, cheers!

    And to everyone else reading this… happy holidays and happy new year in advance!

    08 de December de 2019

    «Chris stared out of the window», a theological tale

    My English class: you should write a story starting with 🙶Chris stared out of the window waiting for the phone to ring🙷. Let’s do it.


    Chris stared out of the window waiting for the phone to ring. He looked into the void while his mind wandered. Time is passing, but it is not. For an eternal being all time is present, but not always is time present. The past, the future are just states of mind for an overlord. But he is still waiting for the phone to ring. Time is coming. The decision was made. It had always been made, before time existed indeed. Chris knows all the details of the plan. He knows because he is God too. He knows because he conceived it. No matter if he had been waiting for the starting signal. No matter if he expects the Holy Spirit to announce it to him. You can call it protocol. He knows because he had decided how to do it. But Chris doubts. He is God. He is the Holy Spirit. But he has been human too. The remembrance of his humanity brings him to a controversial state of mind. Now he doubts. He has always been doubting, since the world is the world and before the existence of time. And after too, because he is an eternal being. He now relives the feelings of being a human. He relives all the feelings of being all the humans. He revisits joy and pain. Joy is good, sure. But joy is nothing special for an overlord god. But pain… pain matters for a human, and Chris has been a human. Chris knows. Chris feels. Chris understands how sad human life can be. Chris knows because he is the Father creator. He created humans. He knows how mediocrity drives the character of all the creatures privileged with consciousness. A poisoned gift. He knows how evil is always just an expression of insecurities, the lack of certainty about what will happen tomorrow. What will happen just the next second. 🙶Will I be alive? Will the pain still exist?🙷. He knows because he feels. And he doubts because he feels. He feels it can’t be fair to judge, to condemn, to punish a living being because it was created that way. This would not be the God full of love the Evangelists announced. How could he punish for a sin he was the cause of? But, if not, can it be fair to all the others who behave according to the Word? All of those who, maybe with love, maybe with pain, maybe just out of selfishness, fulfilled God’s proposal of virtue and goodness. How can he not distinguish their works, not reward their efforts? How can it be fair? How can he be good? His is the power and the glory. He is in doubt. The phone rings.

    20 de November de 2019

    Some GNOME / LAS / Wikimedia love

    For some time now I’ve been dedicating more time to Wikimedia-related activities. I love to share this time with the other opensource communities I’m involved with. This post is just to write down a list of items/resources I’ve created related to events in this domain.

    Wikidata

    If you don’t know about Wikidata yet, you probably should take a look, because it is on its way to becoming the most important linked-data corpus in the world. In the future we will use WD as the cornerstone of many applications. Remember you read it here first.

    About GUADEC:

    About GNOME.Asia:

    About LAS 2019:

    And about the previous LAS format:

    Wikimedia Commons

    Wikimedia Commons is my current favourite place to publish pictures with open licensing. To me it is the ideal place to publish reusable resources with explicit Creative Commons open licensing. And you can contribute your own media without intermediaries.

    About GUADEC:

    About GNOME.Asia:

    About LAS:

    Epilogue

    As you can check, the list is not complete, nor are all items fully described. I invite you to complete whatever information you want. For Wikidata there are many places where you can ask for help. And for WikiCommons you can help by uploading your own pictures. If you have doubts, just use the current examples as references or ask me directly.

    Linux Applications Summit 2019 activity

    Here is a summary of my activities at the past Linux App Summit, held this month in Barcelona.

    My first goal was spreading the word about the Indico conference management system. This was partly done with a lightning talk. Sadly I couldn’t show my slides, but here they are for your convenience. Anyhow, they are not particularly relevant.

    Also, @KristiProgri asked me to help taking pictures of the event. I’m probably the worst photographer in the world and I don’t have a special interest in photography as a hobby, but for some months I’ve been trying to take documentary pictures hopefully relevant for the Wikimedia projects. The good thing is that it seems I’m getting better, especially since I changed to a new smartphone which is making magic with my pictures. I just use a mere Moto G7+ smartphone, but it’s making me really happy with the results, exceeding any of my expectations. I have to say I found the standard camera application doesn’t work well for me when photographing moving targets, but I’m doing better indeed with the wonderful Open Camera opensource Android application.

    I uploaded my pictures to Category:Linux App Summit 2019. Please consider adding yours to the same category.

    Related to this, I added items to Wikidata too.

    I also helped a bit by sharing pics on Twitter under #LinuxAppSummit:

    And finally, I helped the local team with some minor tasks, like moving items around and so on.

    I want to congratulate the whole organization team, and especially the local team, for the results and the love they have put into the event. The results have been excellent, and this is another strong step for the interwoven relations between opensource development communities sharing very close goals.

    My participation in the conference was sponsored by the GNOME Foundation. Thanks very much for the support.

    05 de November de 2019

    Congress/Conference organization tasks list draft

    Well, when checking my blog looking for references to resources related to conference organization, I found I didn’t have any link to this thing I compiled two years ago (!!??). So this post is fixing that.

    After organizing a couple of national and international conferences I compiled a set of tasks useful as a skeleton for your next conference. The list is neither absolutely exhaustive nor strictly formal, but it’s complete enough to be, I think, accurate and useful. In its current form this task list is published at tree.taiga.io as Congress/Conference organization draft: «a simplified skeleton of a kanban project for the organization of conferences. It is specialized in technical and opensource activities, based on real experience».

    I think the resource is still valid and useful. So feel free to use it and provide feedback.

    task list screenshot

    Now, thinking aloud, and considering my crush on EPF Composer, I seriously think I should model the tasks with it as an SPEM method and publish both the sources and a website. And, hopefully, create tools for generating project drafts in well-known tools (GitLab, Taiga itself, etc). Reach out to me if you are interested too :-)

    Enjoy!

    24 de October de 2019

    VCR to WebM with GStreamer and hardware encoding

    Many years ago my family bought a Panasonic VHS video camera and we recorded quite a lot of things: holidays, some local shows, etc. I even got paid 5000 pesetas (30€, more than 20 years ago) a couple of times to record weddings in an amateur way. Since my father passed away less than a year ago, I have wanted to convert those VHS tapes into something that can survive better, technologically speaking.

    For the job I bought a USB 2.0 dongle and connected it to a VHS VCR through a SCART to RCA cable.

    The dongle creates a V4L2 device for video and is detected by PulseAudio for audio. As I want to see what I am converting live, I need to tee both audio and video to the corresponding sinks, while the other branch goes to the encoders, the muxer and the filesink. The command line for that would be:

    gst-launch-1.0 matroskamux name=mux ! filesink location=/tmp/test.webm \
    v4l2src device=/dev/video2 norm=255 io-mode=mmap ! queue ! vaapipostproc ! tee name=video_t ! \
    queue ! vaapivp9enc rate-control=4 bitrate=1536 ! mux.video_0 \
    video_t. ! queue ! xvimagesink \
    pulsesrc device=alsa_input.usb-MACROSIL_AV_TO_USB2.0-02.analog-stereo ! 'audio/x-raw,rate=48000,channels=2' ! tee name=audio_t ! \
    queue ! pulsesink \
    audio_t. ! queue ! vorbisenc ! mux.audio_0

    As you can see, I convert to WebM with VP9 and Vorbis. Something interesting is passing norm=255 to the v4l2src element so it captures PAL, and rate-control=4 (VBR) to the vaapivp9enc element; otherwise it defaults to CQP and the file size would end up being huge.

    You can see the pipeline, which is beautiful, here:

    As you can see, we’re using vaapivp9enc here, which is hardware-enabled, and running this pipeline on my computer consumed more or less 20% of CPU, with the machine absolutely relaxed and leaving me the necessary computing power for my daily work. This would not be possible without GStreamer and the GStreamer VAAPI plugins, unlike other solutions whose instructions you can find online.

    If for some reason you can’t find vaapivp9enc in Debian, you should know there are a couple of packages for the Intel drivers, and the one you should install is intel-media-va-driver. Thanks go to my colleague at Igalia Víctor Jáquez, who maintains gstreamer-vaapi and helped me solve this problem.

    My workflow for this was converting all tapes into WebM and then cutting them into the relevant pieces with PiTiVi, which runs on GStreamer Editing Services, both co-maintained by my colleague at Igalia, Thibault Saunier.

    22 de October de 2019

    Testing Indico opensource event management software

    Indico event management tool

    After organizing a bunch of conferences in the past years I found some communities had problems choosing conference management software. Each alternative had limitations in one way or another. Along the way I collected a list of opensource alternatives, and recently I’ve become very interested in Indico. This project is created and maintained by CERN (yes, those guys who invented the WWW too).

    The most interesting reasons for me are:

    Jornadas WMES 2019

    With the help of Franc Rodríguez we set up an Indico testing instance at https://indico.olea.org. This system is ready to be broken so feel free to experiment.

    So this post is an invitation to any opensource community wanting to test the feasibility of Indico for their future events. Please consider giving it a chance.

    Here are some items I consider relevant for you:

    And some potential enhancements (I haven’t fully checked whether they are currently available or not):

    • videoconf alternatives: https://meet.jit.si
    • social networks integration
      • Twitter
      • Mastodon
      • Matrix
    • exports formats
      • pentabarf
      • xcal, etc
    • full GDPR compliance (seems it just need to add the relevant information to your instance)
    • gravatar support
    • integration with the SSO used by the respective community (to be honest I haven’t checked the Flask-Multipass features)
    • maybe an easier invitation procedure: sending invitation links by email for full setup;
    • map integration (OSM and others).

    For your tests you’ll need to register at the site and contact me (look at the bottom of this page) so I can add you as a manager for your community.

    I think it would be awesome for many communities to share a common software product. Wouldn’t it?

    PS: Great news: next March CERN will host an Indico meeting!
    PPS: Here you can check a fully configured event organized by the Libre Space Foundation people: Open Source CubeSat Workshop 2019.
    PPPS: And now that I’ve got your attention, check our Congress/Conference organization tasks list. It’s free!

    17 de October de 2019

    Gnome-shell Hackfest 2019 – Day 3

    As promised, some late notes on the 3rd and last day of the gnome-shell hackfest, so yesterday!

    Some highlights from my partial view:

    • We had a mind-blowing, in-depth discussion about the per-crtc frame clocks idea that’s been floating around for a while. What started as “light” before-bedtime conversation the previous night continued the day after, straining our neurons in front of a whiteboard. We came out wiser nonetheless, and have a much more concrete idea about how it should work.
    • Georges updated his merge request to replace Cogl structs with graphene ones. This now passes CI and was merged \o/
    • Much patch review happened in place, and some other pretty notable refactors and cleanups were merged.
    • The evening was more rushed than usual, with some people leaving already. The general feeling seemed good!
    • In my personal opinion the outcome was pretty good too. There’s been progress at multiple levels and new ideas sparked, you should look forward to posts from others :). It was also great to put a face to some IRC nicks, and meet again all the familiar ones.

    Kudos to the RevSpace members and especially Hans, without them this hackfest couldn’t have happened.

    16 de October de 2019

    Gnome-shell Hackfest 2019 – Day 2

    Well, we are starting the 3rd and last day of this hackfest… I’ll write about yesterday, which probably means tomorrow I’ll blog about today :).

    Some highlights of what I was able to participate/witness:

    • Roman Gilg of KDE fame came to the hackfest, it was a nice opportunity to discuss mixed DPI densities for X11/Xwayland clients. We first thought about having one server per pixel density, but later on we realized we might not be that far from actually isolating all X11 clients from each other, so why stop there.
    • The conversation drifted into other topics relevant to desktop interoperation. We did discuss window activation and focus-stealing prevention; this is a topic “fixed” in GNOME, but via a private protocol. I already had a protocol draft around, which was sent today to the wayland-devel ML.
    • A plan was devised for what is left of Xwayland-on-demand, and an implementation is in progress.
    • The designers have been doing some exploration and research on how we interact with windows, the overview and the applications menu, and thinking about alternatives. At the end of the day they’ve demoed to us the direction they think we should take.

      I am very much not a designer and I don’t want to spoil their fine work here, so stay tuned for updates from them :).

    • As the social event, we had a very nice BBQ with some hackerspace members. Again kindly organized by Revspace.

    14 de October de 2019

    Gnome-shell Hackfest 2019 – Day 1

    So today kickstarted the gnome-shell hackfest in Leidschendam, the Netherlands.

    There’s a decent number of attendants from multiple parties (Red Hat, Canonical, Endless, Purism, …). We all brought various items and future plans for discussion, and have a number of merge requests in various states to go through. Some exciting keywords are Graphene, YUV, mixed DPI, Xwayland-on-demand, …

    But that is not all! Our finest designers also got together here, and I overheard they are discussing usability of the lock screen between other topics.

    This event wouldn’t have been possible without the Revspace hackerspace people, and especially our host Hans de Goede. They kindly provided the venue and the necessary material; I am deeply thankful for that.

    As there are various discussions going on simultaneously it’s kind of hard to keep track of everything, but I’ll do my best to report back over this blog. Stay tuned!

    13 de October de 2019

    Jornadas Wikimedia España WMES 2019: wikithon on the historical built heritage of Andalusia

    Jornadas WMES 2019

    In my last post I already mentioned that I will be leading a workshop on editing with Wikidata at the Jornadas Wikimedia España 2019. Here are the links and references we will use in the workshop. We focus on the case of the Andalusian historical built heritage because I have been working with it for a while and am familiar with it, but it can be extrapolated to any other similar domain.

    I want to encourage anyone interested to take part, no matter your experience with Wikidata. I think it will be worth it. What I do ask, please, is that everyone brings a laptop.

    Official references

    Main Wikimedia services of interest to us

    Related material in the Wikimedia projects:

    Related Wikidata SPARQL queries:

    Other external services of interest:

    Monument examples

    We will use a few examples as reference material. The Alhambra one is particularly relevant because it is, by far, the entry with the most data in the whole Andalusian heritage guide catalogue.

    Alhambra de Granada

    Puente del Hacho

    Estación de Renfe de Almería

    Acknowledgements

    Jornadas WMES 2019

    My attendance at the conference has been possible thanks to the financial support of the Wikimedia España association. My thanks to them from here.

    10 de October de 2019

    Next conferences

    Just to say I’m going to a couple of conferences here in Spain:

    At WMES 2019 I will lead a Wikidata workshop about adding historical heritage data, basically repeating the one at esLibre.

    At LAS 2019 I plan to attend the Flatpak workshops and to call a BoF for people involved in organizing opensource conferences, to share experiences and reuse tools.

    Many thanks to the Wikimedia España association and the GNOME Foundation for their travel sponsorship. Without their help I could not attend both.

    See you in Pamplona and Barcelona.

    01 de October de 2019

    A new time and life next steps

    the opensource symbol

    Since the beginning of my career in 1998 I’ve been involved with Linux and opensource in one way or another. From sysadmin I grew into distro making, hardware certification and finally consulting, plus some other added skills. In parallel I developed a personal career in libre software communities and had the privilege of giving lots of talks, particularly in Spain and Ibero-America. That was a great time. All this stopped in 2011 with the combination of the big economic crisis in Spain and a personal psychological situation. All of it led me to move back from Madrid to my home city, Almería, to recover my health. Now, after several years here, I’m ready to take a new step and reboot my career.

    Not all this time has been wasted. I dedicated lots of hours to a new project which, in several senses, has been the inverse of typical opensource community practice. Indeed, I’ve tried to apply most of those practices, but instead of on the world-wide Internet, now with a 100% hyper-local focus. This meant working in the context of a small-to-medium city (less than 200k inhabitants), with intensive in-person meetings supported by Internet communications. Not all the results have been as successful as I intended, probably because I kept very big expectations; as Antonio Gramsci said, «I’m a pessimist because of intelligence, but an optimist because of will» :-) The effort was developed in what we named HackLab Almería, and some time ago I wrote a recap about my experience. To me it was both an experiment and a recovery therapy.

    That time served to recover ambitions, à la Gramsci, and to bring relevant, important itinerant events to our nice city, always related to opensource. Building on the experience of the good old HispaLinux conferences, we were able to host a set of extraordinarily great technological conferences: from PyConES 2016 to Akademy 2017, GUADEC 2018 and LibreOffice Conference 2019. For some time I thought Almería was the first city to host these last three… until I realized Brno did it before! The icing on the cake was the first conference on secure programming in Spain: SuperSEC. I consider all of this a great personal success.

    I forgot to mention I enrolled in a university course too, more as an excuse to work in an area for which I had never found time: information and software methodology modeling. This materializes in my degree project, in an advanced state of development but not yet finished, around the ISO/IEC 29110 standard and EPF Composer. I’m giving it a final push in the coming months.

    Now I’m closing this stage to start a new one, with different priorities and goals. The first one is to reboot my professional career, so I’m looking for a new job and have started a B2 English certification course. I’m resuming my participation in opensource communities (I’ll attend LAS 2019 next November) and hope to contribute small but non-trivial collaborations to several communities. After all, I think most of what I’ve been doing all these years has just been shepherding the digital commons.

    See you in your recruitment process! ;-)

    PS: there is also a Spanish version of this post.

    30 de September de 2019

    A new stage: a change of era and my professional future

    Changing tack: more of the same, but in a different way

    the opensource symbol

    Since I started my professional career in 1998 I have almost always been involved with the Linux world and free software. Even if it hasn’t been too brilliant, I don’t regret so much, as they say, what I have done as what I have not done and the opportunities I could have seized. But everything changed in 2011, when the Spanish economic crisis, work problems and, above all, personal ones combined and forced me to move back from Madrid to my home city, Almería, to recover. It has taken time, but it seems we have made it. It was in that period that what we ended up calling HackLab Almería was born.

    Personally, the activity in the HLA was an experiment in applying the baggage of knowledge and practices acquired over more than 10 years in open opensource communities but, in this case, with a completely inverted approach: from mainly online communities with even worldwide reach to a radically hyper-local orientation with a necessarily intense in-person component. At that time I had a lot of free time and I threw myself into creating content, identifying and establishing personal contacts, and energizing a new community that could reach self-sustaining momentum and critical mass. It was also in that period, during a brief visit to Madrid (by then my travelling had dropped to almost zero), that after a motivating conversation with ALMO I started to recover lost enthusiasm and the urge to create, which finally crystallized into months of intense activity which, free of professional or economic ties, also served to recover skills and cultivate new ones, digging into a project aligned with my experience and interesting enough to keep my attention permanently. Useful time, as therapy, to recover self-esteem, peace of mind and intellectual performance.

    Along the way I took the chance to reinforce my practice of the hacker ethic: for years I had been a great dilettante with, perhaps, many things to say but very little impact. And that is not a nice feeling for a narcissist. So I decided to make an effort to talk less and do more. The degree to which I achieved that could be discussed separately some other time, although back then I did write a retrospective. I also took an interest in digging deeper into open knowledge and the digital commons: MusicBrainz, Wiki Commons, Wikidata, OpenStreetMap, etc.

    Around then, and practically by chance, the opportunity arose to bring important technological gatherings (one way or another always related to opensource) to this city of mine, peripheral within the periphery and, perhaps, the only island of Spain located on the Iberian peninsula itself. While I already had previous experience promoting and collaborating on those HispaLinux conferences, the work on PyConES 2016 (thanks, Juanlu, for the trust) was a qualitative leap that later materialized in hosting Akademy 2017, GUADEC 2018 and the LibreOffice Conference 2019 in Almería. For a while I thought ours would be the first city to achieve this triple… until I discovered Brno had beaten us to it :-) Along the way we also invented SuperSEC, the first national conference on secure programming in Spain.

    Now I am bringing this stage to a close, in part quite frustrated. I am not satisfied with all the results, in particular with the local impact. While preparing this article I had thought about going into some descriptive details but… what for? Those who could have been interested were not at the time, and it would still hurt me to go into a retrospective and… in the end, what for? To be yet another lapse vanished into entropy. My conscience is clear, though, because I know that, for better or worse, I gave it my all.

    So: changing tack. October 1st is not a bad date to do it. I am again throwing myself into developing my professional profile and, dear audience, I am looking for a job. Obviously, the closer to and the more related to the free software world and its surroundings, the better. There is still a huge amount to do to build the free digital infrastructure needed for an open digital society, and I want to keep being part of that. After all, I think everything I have done since the 90s has been shepherding the digital commons.

    See you in your recruitment process ;-)

    PS: there is also an English version of this article.

    23 de September de 2019

    LibreOffice Conference 2019 by numbers

    LibreOffice Conference 2019 badge

    LibreOffice Conference 2019 has ended and… it seems people really enjoyed it!

    Here I provide some metrics about the conference. Hopefully they’ll be useful for future editions.

    • Attendees:
      • 114 registered at website before Aug 31 deadline;
      • 122 total registered at the end of the conference;
      • 102 physically registered at the conference in total.
    • Registered countries of origin: Albania, Austria, Belgium, Bolivia, Brazil, Canada, Czech Republic, Finland, France, Germany, Hungary, India, Ireland, Italy, Japan, Korea - Republic of, Luxembourg, Poland, Portugal, Romania, Russian Federation, Slovenia, South Africa, Spain, Sweden, Switzerland, Taiwan, Turkey and the United Kingdom;
    • 4 days: 1 for board and community meetings and 3 for conference talks;
    • 3 tracks;
    • 68 talks, 6 GSoC presentations and 13 lightning talks;
    • 1 new individual certification;
    • 4 social events:
      • welcome party, 70 participants;
      • beach dinner party, 80 participants;
      • theatrical visit to the Alcazaba castle, 50 participants;
      • after conference city visit, 14 participants;
    • 1 hackfest, approximately 50 participants;
    • 1 conference shuttle service bus (capacity for more than 120 persons);
    • Telegram communications:
    • Conference pictures, at least:
    • Weather: two completely unexpected rainy days in Almería o_0
    • About economics, the conference ended with a small surplus, which is nice. Thanks a lot to our sponsors for making this possible.

    Below are some data tables with more information.

    Meals at university cafeteria:

                        Sept. 10   Sept. 11   Sept. 12   Sept. 13   Total
    meals: expected         70        106        106        107      389
    meals: served           54         92         97         86      329


    T-shirts, ordered to our friends of FreeWear:

    type   size (EU)   number
    unisex S 9
    unisex M 24
    unisex L 36
    unisex XL 15
    unisex XXL 15
    unisex XXXL 7
    unisex - tight S 1
    unisex - tight M 4
    unisex - tight L 2
      total 113


    The LibOCon overnight stays at Civitas were:

    day   number
    2019/09/05 1
    2019/09/06 1
    2019/09/07 5
    2019/09/08 32
    2019/09/09 57
    2019/09/10 75
    2019/09/11 77
    2019/09/12 77
    2019/09/13 64
    2019/09/14 13
    2019/09/15 3
    2019/09/16 3
    total overnights: 408


    Twitter campaign activity at @LibOCon:

    Month     tweets   impressions   profile visits   mentions   new followers
    Apr            2          2321              228          9              10
    May            6          8945              301          6              19
    Jun            3          3063               97          3               5
    Jul            3          5355              188          3              13
    Aug           10          8388              208         10               2
    Sept          75         51200             1246        158   (not available)
    totals:       99         79272             2268        189              49



    PS: I’m amazed I’ve blogged almost nothing about the conference until now!!
    PPS: Added the overnight stay numbers at the conference hotel.

    21 de July de 2019

    What am I doing with Tracker?

    “Colored net” by Chris Vees (priorité maison) is licensed under CC BY-NC-ND 2.0

    Some years ago I was asked to come up with some support for sandboxed apps with regard to indexed data. This grew into Tracker 2.0 and domain ontologies, allowing those sandboxed apps to keep their own private data and their own collection of Tracker services to populate it.

    Fast forward to today and… this is still largely unused: Tracker-using flatpak applications still whitelist org.freedesktop.Tracker, and are thus allowed to read and change content there. Although I’ve been told it’s been mostly lack of time… I cannot blame them: domain ontologies offer perfect isolation at the cost of perfect duplication. It may do the job, but it is far from optimal.

    So I got asked again: “do we have a credible story for sandboxed Tracker?”. One way or another, it seems we don’t, so back to the drawing board.

    Somehow, the web world seems to share some problems with our case, and seems to handle it with some degree of success. Let’s have a look at some excerpts of the Sparql 1.1 recommendation (emphasis mine):

    RDF is often used to represent, among other things, personal information, social networks, metadata about digital artifacts, as well as to provide a means of integration over disparate sources of information.

    A Graph Store is a mutable container of RDF graphs managed by a single service. […] named graphs can be added to or deleted from a Graph Store. […] a Graph Store can keep local copies of RDF graphs defined elsewhere […] independently of the original graph.

    The execution of a SERVICE pattern may fail due to several reasons: the remote service may be down, the service IRI may not be dereferenceable, or the endpoint may return an error to the query. […] Queries may explicitly allow failed SERVICE requests with the use of the SILENT keyword. […] (SERVICE pattern) results are returned to the federated query processor and are combined with results from the rest of the query.

    So according to Sparql 1.1, we have multiple “Graph Stores” that manage multiple RDF graphs. They may federate queries to other endpoints with disparate RDF formats and whose availability may vary. This remote data is transparent, and may be used directly or processed for local storage.

    Let’s look back at Tracker: we have a single Graph Store, which really is not that good at graphs. Responsibility for keeping that data updated is spread across multiple services, and ownership of that data is equally scattered.

    Then it hit me: if we transpose those same concepts from the web to the network of local services that your session is, we can use those same mechanisms to cut a number of drawbacks short:

    • Ownership is clear: If a service wants to store data, it would get its own Graph Store instead of modifying “the one”. Unless explicitly supported, Graph Stores cannot be updated from the outside.
    • So is lifetime: There’s been debate about whether data indexed “in Tracker” is permanent data or a cache. Everyone would get to decide their best fit, unaffected by everyone else’s decisions. The data from tracker-miners would totally be a cache BTW :).
    • Increases trustability: If Graph Stores cannot be tampered with externally, you can trust their content to represent the best effort of their only producer, instead of the minimum common denominator of all services updating “the Graph Store”.
    • Gives a mechanism for data isolation: Graph Stores may choose to limit the number of graphs seen in queries federated from other services.
    • Is sandboxing friendly: From inside a sandbox, you may get limited access to the other endpoints you see, or to the graphs offered. Updates are also limited by nature.
    • But it works the same without a sandbox. It also has some benefits, like reducing data duplication and making for smaller databases.

    Domain ontologies from Tracker 2.0 also address some of those points, but very, very roughly. So the first thing to do to get to that RDF nirvana was muscling up the Sparql support in Tracker, and so I did! I already had some “how could it be possible to do…” plans in my head to tackle most of those, but unfortunately they require changes to the internal storage format.

    As it seems to be the time to do one (FTR, the storage format has been “unchanged” since 0.15), I couldn’t just do the bare minimum work: it seemed too good an opportunity to miss, rather than leaving future format changes for leftover Sparql 1.1 syntax support.

    Things ended up escalating into https://gitlab.gnome.org/GNOME/tracker/commits/wip/carlosg/sparql1.1, where it can be said that Tracker supports 100% of the Sparql 1.1 syntax. No buts, maybe bugs.

    Some notable additions are:

    • Graphs are fully supported there, along with all graph management syntax.
    • Support for query federation through SERVICE {}
    • Data dumping through DESCRIBE and CONSTRUCT query forms.
    • Data loading through LOAD update form.
    • The pesky negated property path operator.
    • Support for rdf:langString and rdf:List
    • All missing builtin functions

    This is working well and is almost drop-in (one’s got to mind the graph semantics), so making it material for GNOME 3.34 starts to sound realistic.
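
    To give a flavour of what this enables, here is a hedged sketch of a federated query issued through the libtracker-sparql C API from C++; the remote endpoint and the ontology terms are made up, and it assumes a Tracker build that already includes the SERVICE support from the branch above:

    // Minimal sketch, not real application code.
    #include <libtracker-sparql/tracker-sparql.h>

    int main() {
      GError *error = nullptr;
      TrackerSparqlConnection *connection =
          tracker_sparql_connection_get(nullptr, &error);
      if (!connection) {
        g_printerr("Could not connect: %s\n", error->message);
        return 1;
      }

      // SERVICE forwards part of the query to another Sparql endpoint;
      // SILENT makes a failing remote service non-fatal for the whole query.
      const char *query =
          "SELECT ?title WHERE {"
          "  SERVICE SILENT <http://example.org/sparql> {"
          "    ?item <http://example.org/ns#title> ?title ."
          "  }"
          "}";

      TrackerSparqlCursor *cursor =
          tracker_sparql_connection_query(connection, query, nullptr, &error);
      while (cursor && tracker_sparql_cursor_next(cursor, nullptr, &error))
        g_print("%s\n", tracker_sparql_cursor_get_string(cursor, 0, nullptr));
      return 0;
    }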

    As Sparql 1.1 is a recommendation finished in 2013, and no newer versions seem to be in the works, I think it can be said that Tracker is reaching maturity. Only the HTTP Graph Store Protocol (because why not) remains as the big-ish item needed to reasonably claim we implement all 11 documents. Note that Tracker’s bet on RDF and Sparql started at a time when 1.0 was the current document and 1.1 just an early draft.

    And sandboxing support? You can probably already guess the features it’ll draw from. It’s coming along; actually using Tracker as described above goes a bit deeper than the required query language syntax, but more on that when I have the relevant pieces in place. I just thought I’d stop a moment to announce this huge milestone :).

    12 de July de 2019

    «La película», a short story

    I am rescuing from my archives a little story I wrote in March 1998. It may not be a big deal, but I think the result is pretty.


    The afternoon promised to be fantastic. The film series itself promised it. My friends were doing a great job at that film club and I finally had the chance to see Dune, one of the most special films in science fiction cinema. A good moment to have fun, a good moment to get to know the world a little better. I wasn’t even sixteen.

    I arrived late and could only grab a seat at the back. Not ideal, but at least I didn’t get left outside like others. Sitting next to me was Bearn. This Bearn was a very odd fellow. Too odd for me back then (now I’m the freak), but he wasn’t bad company: fun and doing his own thing. We barely knew each other, and I suppose his impression of me couldn’t have been too different. Little does that matter in the end. The only relevant thing is that he was the sole witness of my bitterness, on that afternoon that surfaces in my memory today.

    I don’t know exactly when it happened. I could only say it was during the second year of BUP. I don’t know the month, or even the season. I barely remember the boy, and of her, only a haze, a longing, an image of beauty freed from every flaw. Like any self-respecting memory.

    I suppose it didn’t take me long to notice the strange familiarity of the girl sitting in the seat in front of me. That capacity for recognition has been switched on in automatic mode in me ever since. Though I should point out to the reader that nowadays there is no special reason for it; it’s simply a habit and it does me no harm. As I was saying, I soon recognized in that girl’s features those of the person who had been driving me out of my mind for weeks. Months. I don’t know whether by then I had already shown her my love (my first and only declaration of love, a sad and lonely “I love you” at the school gate in the only second I ever got to be alone with her); the fact was that she was there and I was caught off guard. My restless ego of a youthful lover grew nervous and my sense of observation sharpened to the point of paranoia. She was alone. A girl like that couldn’t be alone. They never are. They move in the shadow of some male when not protected by the impregnable palisade of their girlfriends. And she was alone. My eyes swept all the space around her, suspicious even of the air she breathed. And they came to rest on a guy sitting to her left. I barely knew him; he seemed like a nice person and had gone out a couple of times with a classmate of mine. A good guy who didn’t fit into the picture. I was dumbfounded when I confirmed that they had indeed come together. The film had started minutes earlier.

    Heartbreak in youth is something very intense. It is empty of all reality, but later, with time, one will long for the passion. When the years have burned the soul, heartbreak is only bitterness stoking the fire. In the damned years of youth it is a heroic virtue. As stupid as all of them, it shone all the same with a heartrending beauty. That afternoon, perhaps a winter one, my heart burst into pieces, the fuse lit by the spark of two hands holding each other. And neither was mine. That afternoon the world came crashing down on me, in parallel with the young Atreides’ initiatory journey. That afternoon I filled the hidden lakes of Dune with my tears. A stone guest, with the sky brushing my fingertips, I lived out my heart’s exile in the desert of a desert planet which, however vast it might have been, would never fill the loneliness of my poor self-pitying soul. Tonight the film was a different one. The desert is the same.


    FidoNET R34: recovering mail from the echomail areas

    This entry was originally published in the esLibre forum, https://charla.eslib.re.


    FidoNet logo

    A few months ago I set out to recover digital material from my archives about the early years of the Linux community in Spain, in particular my FidoNet files. A conversation then came up on Twitter about the R34.Linux echomail area and the possibility of recovering mail from those days in order to rescue and republish it:

    Kishpa_’s initiative seemed wonderful to me, but when I went through my data I found that at some point my message base had crashed and I lost all the mail from five years, or maybe more. The sudden memory of that day hurt almost as much as it did back then.

    GoldED editor

    Given that many of us who were around at the dawn of HispaLinux got our start on FidoNET, my question is this: has anyone, by any chance, overcome the vicissitudes of preserving information across the decades and still has their FidoNET archives available to recover and republish? Not only the mail from the R34.Linux and R34.Unix areas, where it all really began, but any other archived echomail.

    If so, let’s get it to Kishpa_. It’s a lovely digital memory recovery project, even if only for archival purposes.

    Come on, everyone: go digging through those backups!

    A copy of the HispaLinux website from 1998

    Taking advantage of this review of my nineties-era archives, I’m putting up on this website a snapshot of the HispaLinux association’s website from March 1998:

    GoldED editor

    You know: I’m not doing this out of nostalgia, but for the sake of digital memory.

    31 de January de 2019

    A mutter and gnome-shell update

    Some personal highlights:

    Emoji OSK

    The reworked OSK was featured a couple of cycles ago, but a notable thing that was still missing from the design reference was emoji input.

    Not anymore; it’s sitting in a branch as of yet:

    This UI feeds from the same emoji list as GtkEmojiChooser and applies the same categorization/grouping; all the additional variants of an emoji are available through a popover. There’s also a (less catchy) keypad UI in place, ultimately hooked to applications through GtkInputPurpose.
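
    On the application side there is nothing new to implement to benefit from this: widgets just keep advertising what they expect through GtkInputPurpose, and the OSK picks the matching panel. A minimal sketch with plain GTK+ 3 (nothing here is specific to the branch):

    import gi
    gi.require_version('Gtk', '3.0')
    from gi.repository import Gtk

    window = Gtk.Window(title='Input purpose demo')
    entry = Gtk.Entry()
    # Hint that this entry only takes digits; under the Wayland session the
    # on-screen keyboard can then offer the keypad UI instead of the full
    # character layout.
    entry.set_input_purpose(Gtk.InputPurpose.DIGITS)
    window.add(entry)
    window.connect('destroy', Gtk.main_quit)
    window.show_all()
    Gtk.main()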

    I do expect this to be in place for 3.32 for the Wayland session.

    X11 vs Wayland

    Ever since the Wayland work started on mutter, there have been ideas and talks about how mutter “core” should become detached from X11 code. It has been a long and slow process: every design decision has been directed towards this goal, we leaped forward with the 2017 GSoC, and e.g. Georges sums up some of his own recent work in this area.

    For me it started with a “Hey, I think we are not that far off” comment in #gnome-shell earlier this cycle. Famous last words. After rewriting several (many, really) seemingly unrelated subsystems and shuffling things here and there, we are now at a point where gnome-shell might run with --no-x11 set. One more little push and we will be able to launch mutter as a pure Wayland compositor that just spawns Xwayland on demand.

    What’s after that? It’s certainly an important milestone, but by no means are we done here. Also, gnome-settings-daemon consists for the most part of X11 clients, which spoils the fun by requiring Xwayland very early in a real session. Guess what’s next!

    At the moment about 80% of the patches have been merged. I cannot assure at this point that it will all be in place for 3.32, but 3.34 most surely. But here’s a small yet extreme proof of work:

    Performance

    It’s been nice to see some of the performance improvements I did last cycle finally being merged. Some notable ones, like the one that stopped triggering full surface redraws on every surface invalidation. I also managed to get some blocking operations out of the main loop, which should fix many of the seemingly random stalls some people were seeing.

    Those are already in 3.31.x, with many other nice fixes in this area from Georges, Daniel Van Vugt et al.

    Fosdem

    As a minor note, I will be attending Fosdem and the GTK+ Hackfest happening right after. Feel free to say hi or find Wally, whatever comes first.

    29 de January de 2019

    Working on the Chromium Servicification Project

    Igalia & Chromium

    It’s been a few months already since I (re)joined Igalia as part of its Chromium team and I couldn’t be happier about it: right from the very first day I felt perfectly integrated in the team, and quickly started making my way through the (fully upstream) project that would keep me busy during the following months: the Chromium Servicification Project.

    But what is this “Chromium servicification project“? Well, according to the Wiktionary the word “servicification” means, applied to computing, “the migration from monolithic legacy applications to service-based components and solutions”, which is exactly what this project is about: as described in the Chromium servicification project’s website, the whole purpose behind this idea is “to migrate the code base to a more modular, service-oriented architecture”, in order to “produce reusable and decoupled components while also reducing duplication”.

    Doing so would not only make Chromium a more manageable project from a source code point of view and create better and more stable interfaces to embed Chromium from different projects, but it should also enable teams to experiment with new features by combining these services in different ways, as well as to ship different products based on Chromium without having to bundle the whole world just to provide a particular set of features.

    For instance, as Camille Lamy put it in the talk delivered (slides here) during the latest Web Engines Hackfest,  “it might be interesting long term that the user only downloads the bits of the app they need so, for instance, if you have a very low-end phone, support for VR is probably not very useful for you”. This is of course not the current status of things yet (right now everything is bundled into a big executable), but it’s still a good way to visualise where this idea of moving to a services-oriented architecture should take us in the long run.

    Chromium Servicification Layers

    With this in mind, the idea behind this project would be to work on the migration of the different parts of Chromium that depend on those components being converted into services, which would be part of a “foundation” base layer providing the core services that any application, framework or runtime built on top of Chromium would need.

    As you can imagine, the whole idea of refactoring such an enormous code base like Chromium’s is daunting and a lot of work, especially considering that currently ongoing efforts can’t simply be stopped just to perform this migration, and that is where our focus currently lies: we integrate with different teams from the Chromium project working on the migration of those components into services, and we make sure that the clients of their old APIs move away from them and use the new services’ APIs instead, while keeping everything running normally in the meantime.

    At the beginning, we started working on the migration to the Network Service (which allows running Chromium’s network stack even without a browser) and managed to get it shipped in Chromium Beta by early October already, which was a pretty big deal as far as I understand. In my particular case, that stage was a very short ride since that migration was nearly done by the time I joined Igalia, but it’s still something worth mentioning due to the impact it had on the project, for extra context.

    After that, our team started working on the migration of the Identity service, where the main idea is to encapsulate the functionality of accessing the user’s identities right through this service, so that one day this logic can be run outside of the browser process. One interesting bit about this migration is that this particular functionality (largely implemented inside the sign-in component) has historically been located quite high up in the stack, and yet it’s now being pushed all the way down into that “foundation” base layer, as a core service. That’s probably one of the factors contributing to making this migration quite complicated, but everyone involved is being very dedicated and has been very helpful so far, so I’m confident we’ll get there in a reasonable time frame.

    If you’re curious enough, though, you can check this status report for the Identity service, where you can see the evolution of this particular migration, along with the impact our team had since we started working on this part, back on early October. There are more reports and more information in the mailing list for the Identity service, so feel free to check it out and/or subscribe there if you like.

    One clarification is needed, though: for now, the scope of these migrations is focused on using the public C++ APIs that such services expose (see //services/<service_name>/public/cpp), but in the long run the idea is that those services will also provide Mojo interfaces. That will enable using their functionality regardless of whether you’re running those services as part of the browser’s process or inside their own separate processes, which will then allow the flexibility that Chromium will need to run smoothly and safely in different kinds of environments, from the least constrained ones to others with a less favourable set of resources at their disposal.

    And this is it for now, I think. I was really looking forward to writing a status update about what I’ve been up to in the past months and here it is, even though it’s not the shortest of all reports.

    FOSDEM 2019

    One last thing, though: as usual, I’m going to FOSDEM this year as well, along with a bunch of colleagues & friends from Igalia, so please feel free to drop me/us a line if you want to chat and/or hangout, either to talk about work-related matters or anything else really.

    And, of course, I’d also be more than happy to talk about any of the open job positions at Igalia, should you consider applying. There are quite a few of them available at the moment for all kinds of things (most of them available for remote work): from more technical roles such as graphics, compilers, multimedia, JavaScript engines, browsers (WebKit, Chromium, Web Platform) or systems administration (this one not available for remote work, though), to other less “hands-on” types of roles like developer advocate, sales engineer or project manager, so it’s possible there’s something interesting for you if you’re considering joining such a special company as this one.

    See you in FOSDEM!

    08 de January de 2019

    Epiphany automation mode

    Last week I finally found some time to add the automation mode to Epiphany, which allows running automated tests using WebDriver. It’s important to note that the automation mode is not expected to be used by users or applications to control the browser remotely, but only by WebDriver automated tests. For that reason, the automation mode is incompatible with a primary user profile. There are a few other things affected by the automation mode:

    • There’s no persistence. A private profile is created in tmp and only ephemeral web contexts are used.
    • URL entry is not editable, since users are not expected to interact with the browser.
    • An info bar is shown to notify the user that the browser is being controlled by automation.
    • The window decoration is orange to make it even clearer that the browser is running in automation mode.

    So, how can I write tests to be run in Epiphany? First, you need to install a recent enough Selenium. For now, only the Python API is supported. Selenium doesn’t have an Epiphany driver, but the WebKitGTK driver can be used with any WebKitGTK+-based browser, by providing the browser information as part of the session capabilities.

    from selenium import webdriver
    
    options = webdriver.WebKitGTKOptions()
    options.binary_location = 'epiphany'
    options.add_argument('--automation-mode')
    options.set_capability('browserName', 'Epiphany')
    options.set_capability('version', '3.31.4')
    
    ephy = webdriver.WebKitGTK(options=options, desired_capabilities={})
    ephy.get('http://www.webkitgtk.org')
    ephy.quit()
    

    This is a very simple example that just opens Epiphany in automation mode, loads http://www.webkitgtk.org and closes Epiphany. A few comments about the example:

    • Version 3.31.4 will be the first one including the automation mode.
    • The parameter desired_capabilities shouldn’t be needed, but there’s a bug in selenium that has been fixed very recently.
    • WebKitGTKOptions.set_capability was added in selenium 3.14, if you have an older version you can use the following snippet instead
    from selenium import webdriver
    
    options = webdriver.WebKitGTKOptions()
    options.binary_location = 'epiphany'
    options.add_argument('--automation-mode')
    capabilities = options.to_capabilities()
    capabilities['browserName'] = 'Epiphany'
    capabilities['version'] = '3.31.4'
    
    ephy = webdriver.WebKitGTK(desired_capabilities=capabilities)
    ephy.get('http://www.webkitgtk.org')
    ephy.quit()
    

    To simplify the driver instantiation you can create your own Epiphany driver derived from the WebKitGTK one:

    from selenium import webdriver
    
    class Epiphany(webdriver.WebKitGTK):
        def __init__(self):
            options = webdriver.WebKitGTKOptions()
            options.binary_location = 'epiphany'
            options.add_argument('--automation-mode')
            options.set_capability('browserName', 'Epiphany')
            options.set_capability('version', '3.31.4')
    
            webdriver.WebKitGTK.__init__(self, options=options, desired_capabilities={})
    
    ephy = Epiphany()
    ephy.get('http://www.webkitgtk.org')
    ephy.quit()
    

    The same for selenium < 3.14

    from selenium import webdriver
    
    class Epiphany(webdriver.WebKitGTK):
        def __init__(self):
            options = webdriver.WebKitGTKOptions()
            options.binary_location = 'epiphany'
            options.add_argument('--automation-mode')
            capabilities = options.to_capabilities()
            capabilities['browserName'] = 'Epiphany'
            capabilities['version'] = '3.31.4'
    
            webdriver.WebKitGTK.__init__(self, desired_capabilities=capabilities)
    
    ephy = Epiphany()
    ephy.get('http://www.webkitgtk.org')
    ephy.quit()
    

    25 de November de 2018

    Frogr 1.5 released

    It’s almost one year later and, despite the acquisition of Flickr by SmugMug a few months ago and the predictions from some people that it would mean me giving up on using Flickr & maintaining frogr, here comes the new release of frogr 1.5.

    Frogr 1.5 screenshot

    Not many changes this time, but some of them hopefully still useful for some people, such as the empty initial state that is now shown when you don’t have any pictures, as requested a while ago already by Nick Richards (thanks Nick!), or the removal of the applications menu from the shell’s top panel (now integrated in the hamburger menu), in line with the “App Menu Retirement” initiative.

    Then there were some fixes here and there as usual, and quite so many updates to the translations this time, including a brand new translation to Icelandic! (thanks Sveinn).

    So this is it this time, I’m afraid. Sorry there’s not much to report, and sorry as well for the long time it took me to do this release, but this past year has been pretty busy between the hectic work at Endless during the first part of the year, a whole international relocation with my family to move back to Spain during the summer, and me getting back to work at Igalia as part of the Chromium team, where I’m currently pretty busy working on the Chromium Servicification project (which is material for a completely different blog post, of course).

    Anyway, last but not least, feel free to grab frogr from the usual places as outlined in its main website, among which I’d recommend the Flatpak method, either via GNOME Software  or from the command line by just doing this:

    flatpak install --from \
        https://flathub.org/repo/appstream/org.gnome.frogr.flatpakref

    For more information just check the main website, which I also updated to this latest release, and don’t hesitate to reach out if you have any questions or comments.

    Hope you enjoy it. Thanks!

    15 de November de 2018

    On the track for 3.32

    It happens sneakily, but there are more things going on on the Tracker front than the occasional fallout. Yesterday 2.2.0-alpha1 was released, containing some notable changes.

    On and off during the last year, I’ve been working on a massive rework of the SPARQL parser. The current parser was fairly solid, but hard to extend for some of the syntax in the SPARQL 1.1 spec. After multiple attempts and failures at implementing property paths, I convinced myself this was the way forward.

    The main difference is that the previous parser was more of a serializer to SQL, just minimal state was preserved across the operation. The new parser does construct an expression tree so that nodes may be shuffled/reevaluated. This allows some sweet things:

    • Property paths are a nice resource to write more idiomatic SPARQL, most property path operators are within reach now. There’s currently support for sequence paths:

      # Get all files in my homedir
      SELECT ?elem {
        ?elem nfo:belongsToContainer/nie:url 'file:///home/carlos'
      }
      


      And inverse paths:

      # Get all files in my homedir by inverting
      # the child to container relation
      SELECT ?elem {
        ?homedir nie:url 'file:///home/carlos' ;
                 ^nfo:belongsToContainer ?elem
      }
      

      There are harder ones like + and * that will require recursive selects, and there’s the negation (!) operator, which is not possible to implement yet.

    • We now have prepared statements! A TrackerSparqlStatement object was introduced, capable of holding a query with parameters which can be set/replaced prior to execution.

      conn = tracker_sparql_connection_get (NULL, NULL);
      stmt = tracker_sparql_connection_query_statement (conn,
                                                        "SELECT ?u { ?u fts:match ~term }",
                                                        NULL, NULL);
      
      tracker_sparql_statement_bind_string (stmt, "term", search_term);
      cursor = tracker_sparql_statement_execute (stmt, NULL, NULL);
      

      This is a long-sought protection against injections. The object is cacheable and can service multiple cursors asynchronously, so it will also be an improvement for frequent queries (there’s a Python sketch of the same thing right after this list).

    • More concise SQL is generated at places, which brings slight improvements on SQLite query planning.
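
    For completeness, here is what the same prepared-statement dance might look like from Python. This is just a sketch, assuming the GObject-introspection bindings expose the API above (a fair bet, since it’s plain GObject), and the search term is obviously made up:

    import gi
    gi.require_version('Tracker', '2.0')
    from gi.repository import Tracker

    conn = Tracker.SparqlConnection.get(None)

    # The statement is parsed once and may be cached and reused with
    # different parameter values, which also protects against injections.
    stmt = conn.query_statement('SELECT ?u { ?u fts:match ~term }', None)

    stmt.bind_string('term', 'hackfest')
    cursor = stmt.execute(None)

    while cursor.next(None):
        # get_string() returns the value plus its length.
        print(cursor.get_string(0)[0])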

    This also got the ideas churning towards future plans, the trend being a generic triple store as Sparql 1.1 capable as possible. There are also some ideas about better data isolation for Flatpak and sandboxes in general (seeing that the currently supported approach didn’t catch on). Those will eventually happen in this or the following cycles, but I’ll reserve that for another blog post.

    An eye was kept on memory usage too (mostly unrealized ideas from the performance hackfest earlier this year), tracker-store has been made to automatically shut down when unneeded (ideally most of the time, since it just takes care of updates and the unruly apps that use the bus connection), and tracker-miner-fs took over the functionality of tracker-miner-apps. That’s two processes fewer in your default session.

    In general, we’re on the way to an exciting release, and there’s more to come!

    25 de October de 2018

    3 events in a month

    As part of my job at Igalia, I have been attending 2-3 events per year. My role, mostly as a Chromium stack engineer, is not usually very demanding in terms of conference trips, but they are quite important as an opportunity to meet collaborators and project mates.

    This month has been a bit different, as I ended up visiting the LG Silicon Valley Lab in Santa Clara (California), the Igalia headquarters in A Coruña, and Dresden. It was mostly because I got involved in the discussions for the web runtime implementation being developed by Igalia for AGL.

    AGL f2f at LGSVL

    It is always great to visit LG Silicon Valley Lab (Santa Clara, US), where my team is located. I have been participating for 6 years in the development of the webOS web stack you can most prominently enjoy in LG webOS smart TV.

    One of the goals for the next months at AGL is providing an efficient web runtime. At LGSVL we have been developing and maintaining WAM, the webOS web runtime. And as it was released under an open source license in webOS Open Source Edition, it looked like a great match for AGL. So my team did a proof of concept in May and it was successful. At the same time, Igalia has been working on porting the Chromium browser to AGL. So, after some discussions, AGL approved sponsoring my company, Igalia, to port the LG webOS web runtime to AGL.

    As LGSVL was hosting the september 2018 AGL f2f meeting, Igalia sponsored my trip to the event.

    AGL f2f Santa Clara 2018, AGL wiki CC BY 4.0

    So we took the opportunity to continue discussions and progress in the development of the WAM AGL port. And, as we expected, it was quite beneficial to unblock tasks like AGL app framework security integration, and the support of AGL latest official release, Funky Flounder. Julie Kim from Igalia attended the event too, and presented an update on the progress of the Ozone Wayland port.

    The organization and the venue were great. Thanks to LGSVL!

    Web Engines Hackfest 2018 at Igalia

    The next trip was definitely closer: just a 90-minute drive to our Igalia headquarters in A Coruña.


    Igalia has been organizing this event since 2009. It is a cross-web-engine event, where engineers of Mozilla, Chromium and WebKit have been meeting yearly to do some hacking, and discuss the future of the web.

    This time my main interest was participating in the discussions about the effort by Igalia and Google to support Wayland natively in Chromium. I was pleased to learn that around 90% of the work had already landed in upstream Chromium. Great news, as it will smooth the integration of Chromium for embedders using Ozone Wayland, like webOS. It was also great to learn about the work to improve GPU performance by reducing the number of copies required for painting web contents.

    Web Engines Hackfest 2018 CC BY-SA 2.0

    Other topics of my interest:
    – We did a follow-up of the discussion in last BlinkOn about the barriers for Chromium embedders, sharing the experiences maintaining a downstream Chromium tree.
    – Joined the discussions about the future of WebKitGTK. In particular the graphics pipeline adaptation to the upcoming GTK+ 4.

    As usual, the organization was great. We had 70 people in the event, and it was awesome to see all the activity in the office, and so many talented engineers in the same place. Thanks Igalia!

    Web Engines Hackfest 2018 CC BY-SA 2.0

    AGL All Members Meeting Europe 2018 at Dresden

    The last event in barely a month was my first visit to the beautiful town of Dresden (Germany).

    The goal was continuing the discussions for the projects Igalia is developing for the AGL platform: Chromium upstream native Wayland support, and the WAM web runtime port. We also had a booth showcasing that work, as well as our lightweight WebKit port WPE, which was, as usual, attracting interest with its 60fps video playback performance on a Raspberry Pi 2.

    I co-presented with Steve Lemke a talk about the automotive activities at LGSVL, taking the opportunity to give an update on the status of the WAM web runtime work for AGL (slides here). The project is progressing, and Igalia should soon be landing the first results of the work.

    Igalia booth at AGL AMM Europe 2018

    It was great to meet all these people and discuss in person the architecture proposal for the web runtime, unblocking several tasks and offering more detailed planning for the next months.

    Dresden was great, and I can’t help highlighting the reception and guided tour in the Dresden Transportation Museum. Great choice by the organization. Thanks to Linux Foundation and the AGL project community!

    Next: Chrome Dev Summit 2018

    So… what’s next? I will be visiting San Francisco in November for Chrome Dev Summit.

    I can only thank Igalia for sponsoring my attendance at these events. They are quite important for keeping things moving forward. But it is also really nice to meet friends and collaborators. Thanks Igalia!

    03 de August de 2018

    On Moving

    Winds of Change. One of my favourite songs ever and one that comes to my mind now that me and my family are going through quite some important changes, once again. But let’s start from the beginning…

    A few years ago, back in January 2013, my family and me moved to the UK as the result of my decision to leave Igalia after almost 7 years in the company to embark ourselves on the “adventure” of living abroad. This was an idea we had been thinking about for a while already at that time, and our situation back then suggested that it could be the right moment to try it out… so we did.

    It was kind of a long process though: I first arrived alone in January to make sure I would have time to figure things out and find a permanent place for us to live in, and then my family joined me later in May, once everything was ready. Not great, if you ask me, to be living separated from your loved ones for 4 full months, not to mention the juggling my wife had to do during that time to combine her job with looking after the kids mostly on her own… but we managed to see each other every 2-3 weekends thanks to the London – Coruña direct flights in the meantime, so at least it was bearable from that point of view.

    But despite those not so great (yet expected) beginnings, I have to say that these past 5+ years have been an incredible experience overall, and we don’t have a single regret about making the decision to move, maybe just a few minor and punctual things if I’m completely honest, but that’s about it. For instance, it’s been just beyond incredible and satisfying to see my kids develop their English skills “from zero to hero”, settle at their school, make new friends and, in one word, evolve during these past years. And that alone would have been a good reason to justify the move already, but it turns out we also have plenty of other reasons, as we all have evolved and enjoyed the ride quite a lot as well, made many new friends, got to know many new places, worked on different things… a truly enriching experience indeed!

    In a way, I confess that this could easily be one of those things we’d probably have never done had we known in advance all the things we’d have to do and go through along the way, so I’m very grateful for that naive ignorance, since that’s probably how we found the courage, energy and time to do it. And looking backwards, it seems clear to me that it was the right time to do it.

    But now it’s 2018 and, even though we had such a great time here both from personal and work-related perspectives, we have decided that it’s time for us to come back to Galicia (Spain), and try to continue our vital journey right from there, in our homeland.

    And before you ask… no, this is not because of Brexit. I recognize that the result of the referendum has been a “contributing factor” (we surely didn’t think as much about returning to Spain before that 23 of June, that’s true), but there were more factors contributing to that decision, which somehow have aligned all together to tell us, very clearly, that Now It’s The Time…

    For instance, we always knew that we would eventually move back for my wife to take over the family business, and also that we’d rather make the move in a way that it would be not too bad for our kids when it happened. And having a 6yo and a 9yo already it feels to us like now it’s the perfect time, since they’re already native English speakers (achievement unlocked!) and we believe that staying any longer would only make it harder for them, especially for my 9yo, because it’s never easy to leave your school, friends and place you call home behind when you’re a kid (and I know that very well, as I went through that painful experience precisely when I was 9).

    Besides that, I’ve also recently decided to leave Endless after 4 years in the company and so it looks like, once again, moving back home would fit nicely with that work-related change, for several reasons. Now, I don’t want to enter into much detail on why exactly I decided to leave Endless, so I think I’ll summarize it as me needing a change and a rest after these past years working on Endless OS, which has been an equally awesome and intense experience as you can imagine. If anything, I’d just want to be clear on that contributing to such a meaningful project surrounded by such a team of great human beings, was an experience I couldn’t be happier and prouder about, so you can be certain it was not an easy decision to make.

    Actually, quite the opposite: a pretty hard one I’d say… but a nice “side effect” of that decision, though, is that leaving at this precise moment would allow me to focus on the relocation in a more organized way as well as to spend some quality time with my family before leaving the UK. Besides, it will hopefully be also useful for us to have enough time, once in Spain, to re-organize our lives there, settle properly and even have some extra weeks of true holidays before the kids start school and we start working again in September.

    Now, taking a few weeks off and moving back home is very nice and all that, but we still need to have jobs, and this is where our relocation gets extra interesting as it seems that we’re moving home in multiple ways at once…

    For one, my wife will start taking over the family business with the help of her dad in her home town of Lalín (Pontevedra), where we plan to be living for the foreseeable future. This is the place where she grew up and where her family and many friends live, but also a place she hasn’t lived in for the last 15 years, so the fact that we’ll be relocating there is already quite a thing in the “moving back home” department for her…

    Second, for my kids this will mean going back to having their relatives nearby once again, as well as friends they could only see and play with during holidays until now, which I think is a very good thing for them. Of course, this doesn’t feel as much like moving home for them as it does for us, since they obviously consider the UK their home for now, but our hope is that it will be ok in the medium-long term, even though it will likely be a bit challenging for them at the beginning.

    Last, I’ll be moving back to work at Igalia after almost 6 years since I left which, as you might imagine, feels to me very much like “moving back home” too: I’ll be going back to working in a place I’ve always loved so much for multiple reasons, surrounded by people I know and who I consider friends already (I even would call some of them “best friends”) and with its foundations set on important principles and values that still matter very much to me, both from technical (e.g. Open Source, Free Software) and not so technical (e.g. flat structure, independence) points of view.

    Those who know me better might very well think that I’ve never really moved on as I hinted in the title of the blog post I wrote years ago, and in some way that’s perhaps not entirely wrong, since it’s no secret I always kept in touch throughout these past years at many levels and that I always felt enormously proud of my time as an Igalian. Emmanuele even told me that I sometimes enter what he seems to call an “Igalia mode” when I speak of my past time in there, as if I was still there… Of course, I haven’t seen any formal evidence of such thing happening yet, but it certainly does sound like a possibility as it’s true I easily get carried away when Igalia comes to my mind, maybe as a mix of nostalgia, pride, good memories… those sort of things. I suppose he’s got a point after all…

    So, I guess it’s only natural that I finally decided to apply again since, even though both the company and me have evolved quite a bit during these years, the core foundations and principles it’s based upon remain the same, and I still very much align with them. But applying was only one part, so I couldn’t finish this blog post without stating how grateful I am for having been granted this second opportunity to join Igalia once again because, being honest, more often than not I was worried about whether I would be “good enough” for the Igalia of 2018. And the truth is that I won’t know for real until I actually start working and stay in the company for a while, but knowing that both my former colleagues and the newer Igalians who joined since I left trust me enough to join is all I need for now, and I couldn’t be more excited nor happier about it.

    Anyway, this post is already too long and I think I’ve covered everything I wanted to mention On Moving (pun intended with my post from 2012, thanks Will Thompson for the idea!), so I think I’ll stop right here and re-focus on the latest bits related to the relocation before we effectively leave the UK for good, now that we finally left our rented house and put all our stuff in a removals van. After that, I expect a few days of crazy unpacking and bureaucracy to properly settle in Galicia and then hopefully a few weeks to rest and get our batteries recharged for our new adventure, starting soon in September (yet not too soon!).

    As usual, we have no clue of how future will be, but we have a good feeling about this thing of moving back home in multiple ways, so I believe we’ll be fine as long as we stick together as a family as we always did so far.

    But in any case, please wish us good luck. That’s always welcome! :-)

    17 de May de 2018

    Performance hackfest

    Last evening I came back from the GNOME performance hackfest happening in Cambridge. There was plenty of activity, clear skies, and pub evenings. Here’s some incomplete and unordered items, just the ones I could do/remember/witness/talk/overhear:

    • Xwayland 1.20 seems to be a big battery saver. Christian Kellner noticed that X11 Firefox playing Youtube could take his laptop to >20W consumption, traced to fairly intensive GPU activity. One of the first things we did was trying master, which dropped power draw to 8-9W. We presumed this was due to the implementation of the Present extension.
    • I was looking into dropping the gnome-shell usage of AtspiEventListener for the OSK. It is really taxing on CPU usage (even if the events we want are a minuscule subset, gnome-shell will forever get all that D-Bus traffic, and a11y is massively verbose), plus it slowly but steadily leaks memory.

      For the other remaining path I started looking into at least being able to deinitialize it. The leak deserves investigation, but I thought my time could be better invested on other things than learning yet another codebase.

    • Jonas Ådahl and Christian Hergert worked towards having Mutter dump detailed per-frame information, and Sysprof able to visualize it. This is quite exciting, as all the classic options just let us know where we spend time overall, but don’t let us know whether we missed the frame mark, nor precisely why that would be. Update: it’s been pointed out to me that Eric Anholt also worked on GPU perf events in mesa/vc4, so this info could also be visualized through sysprof
    • Peter Robinson and Marco Trevisan ran into some unexpected trouble when booting GNOME on an ARM board with no input devices whatsoever. I helped a bit with debugging and ideas, and Marco did some patches to neatly handle this situation.
    • Hans de Goede did some nice progress towards having the GDM session consume as little as possible while switched away from it.
    • Some patch review went on; Jonas, Marco and me spent some time looking at a screen very closely and discussing the mipmapping optimizations from Daniel Van Vugt.
    • I worked towards fixing the reported artifact from my patches to aggressively cache paint volumes. These are basically one-off cases where individual ClutterActors break the invariants that would make caching possible.
    • Christian Kellner picked up my idea of performing pointer picking purely on the CPU side when the stage purely consists of 2D actors, instead of using the usual GL approach of “repaint in distinctive colors, read pixel to perform hit detection” which is certainly necessary for 3D, but quite a big roundtrip for 2D.
    • Alberto Ruiz and Richard Hughes talked about how to improve gnome-software memory usage in the background.
    • Alberto and me briefly dabbled with the idea of having a specific search provider API that was more tied to Tracker, in order to ease the many context switches triggered by overview search.
    • On the train ride back, I unstashed and continued work on a WIP tracker-miners patch to have tracker-extract able to shut down on inactivity. One less daemon usually running.

    Overall, it was a nice and productive event. IMO having people with good knowledge both deep in the stack and wide across GNOME was decisive; I hope we can repeat this feat again soon!

    06 de May de 2018

    Updating Endless OS to GNOME Shell 3.26 (Video)

    It’s been a pretty hectic time during the past months for me here at Endless, busy with updating our desktop to the latest stable version of GNOME Shell (3.26, at the time the process started), among other things. And in all this excitement, it seems like I forgot to blog so I think this time I’ll keep it short for once, and simply link to a video I made a couple of months ago, right when I was about to finish the first phase of the process (which ended up taking a bit longer than expected).

    Note that the production quality of this video is far from high (unsurprisingly), but the feedback I got so far is that it has apparently been very useful to explain to less technically inclined people what doing a rebase of these characteristics means, and with that in mind I woke up this morning realizing that it might be good to give it its own entry in my personal blog, so here it is.


    (Pro-tip: Enable video subtitles to see contextual info)

    Granted, this hasn’t been a task as daunting as The Great Rebase I was working on one year ago, but still pretty challenging for a different set of reasons that I might leave for a future, and more detailed, post.

    Hope you enjoy watching the video as much as I did making it.

    21 de March de 2018

    Updated Chromium Legacy Wayland Support

    Introduction

    The future Ozone Wayland backend is still not ready for shipping, so we are announcing the release of an updated Ozone Wayland backend for Chromium, based on the implementation provided by Intel. It is rebased on top of the latest stable Chromium release and you can find it in my team’s GitHub. Hope you will appreciate it.

    Official Chromium on Linux desktop nowadays

    Linux desktop is progressively migrating to use Wayland as the display server. It is the default option in Fedora, Ubuntu ~~and, more importantly, the next Ubuntu Long Term Support release will ship Gnome Shell Wayland display server by default~~ (P.S. since this post was originally written, Ubuntu has delayed the Wayland adoption for LTS).

    As it is now, Chromium browser support for the Linux desktop is based on X11. This means it natively interacts with an X server and with its XDG extensions for displaying the contents and receiving user events. But, as said, the next generation of the Linux desktop will be using Wayland display servers instead of X11. How does that work? Using the XWayland server, a full X11 server built on top of the Wayland protocol. Ok, but that has an impact on performance: Chromium needs to communicate with and paint to X11-provided buffers, and then those buffers need to be shared with the Wayland display server. And the user events need to be proxied from the Wayland display server through the XWayland server and the X11 protocol. It requires more resources: more memory, CPU, and GPU. And it adds more latency to the communication.

    Ozone

    Chromium supports officially several platforms (Windows, Android, Linux desktop, iOS). But it provides abstractions for porting it to other platforms.

    The set of abstractions is named Ozone (more info here). It allows implementing one or more platform components with the hooks for properly integrating with a platform that is not in the set of officially supported targets. Among other things it provides abstractions for:
    * Obtaining accelerated surfaces.
    * Creating and obtaining windows to paint the contents.
    * Interacting with the desktop cursor.
    * Receiving user events.
    * Interacting with the window manager.

    Chromium and Wayland (2014-2016)

    Even if Wayland was not used on Linux desktop, a bunch of embedded devices have been using Wayland for their display server for quite some time. LG has been shipping a full Wayland experience on the webOS TV products.

    In the last 4 years, Intel has been providing an implementation of Ozone abstractions for Wayland. It was an amazing work that allowed running Chromium browser on top of a Wayland compositor. This backend has been the de facto standard for running Chromium browser on all these Wayland-enabled embedded devices.

    But the development of this implementation has mostly stopped around Chromium 49 (though rebases on top of Chromium 51 and 53 have been provided).

    Chromium and Wayland (2018+)

    Since the end of 2016, Igalia has been involved on several initiatives to allow Chromium to run natively in Wayland. Even if this work is based on the original Ozone Wayland backend by Intel, it is mostly a rewrite and adaptation to the future graphics architecture in Chromium (Viz and Mus).

    This is being developed in the Igalia GitHub, downstream, though it is expected to be landed upstream progressively. Hopefully, at some point in 2018, this new backend will be fully ready for shipping products with it. But we are still not there. ~~Some major missing parts are Wayland TextInput protocol and content shell support~~ (P.S. since this was written, both TextInput and content shell support are working now!).

    More information on these posts from the authors:
    * June 2016: Understanding Chromium’s runtime ozone platform selection (by Antonio Gomes).
    * October 2016: Analysis of Ozone Wayland (by Frédéric Wang).
    * November 2016: Chromium, ozone, wayland and beyond (by Antonio Gomes).
    * December 2016: Chromium on R-Car M3 & AGL/Wayland (by Frédéric Wang).
    * February 2017: Mus Window System (by Frédéric Wang).
    * May 2017: Chromium Mus/Ozone update (H1/2017): wayland, x11 (by Antonio Gomes).
    * June 2017: Running Chromium m60 on R-Car M3 board & AGL/Wayland (by Maksim Sisov).

    Releasing legacy Ozone Wayland backend (2017-2018)

    Ok, so new Wayland backend is still not ready in some cases, and the old one is unmaintained. For that reason, LG is announcing the release of an updated legacy Ozone Wayland backend. It is essentially the original Intel backend, but ported to current Chromium stable.

    Why? Because we want to provide a migration path to the future Ozone Wayland backend. And because we want to share this effort with other developers willing to run Chromium on Wayland immediately, or who are still using the old backend and cannot immediately migrate to the new one.

    WARNING If you are starting development for a product that is going to ship in 1-2 years… your best option is very likely to migrate now to the new Ozone Wayland backend (and help with the missing bits). We will stop maintaining this legacy backend ourselves once the new Ozone Wayland backend lands upstream and covers all our needs.

    What does this port include?
    * Rebased on top of Chromium m60, m61, m62 and m63.
    * Ported to GN.
    * It already includes some changes to adapt to the new Ozone Wayland refactors.

    It is hosted at https://github.com/lgsvl/chromium-src.

    Enjoy it!

    Originally published at webOS Open Source Edition Blog. and licensed under Creative Commons Attribution 4.0.

    28 de December de 2017

    Frogr 1.4 released

    Another year goes by and, again, I feel the call to make one more release just before 2017 is over, so here we are: frogr 1.4 is out!

    Screenshot of frogr 1.4

    Yes, I know what you’re thinking: “Who uses Flickr in 2017 anyway?”. Well, as shocking as this might seem to you, it is apparently not just me who is using this small app, but also another 8,935 users out there issuing an average of 0.22 Queries Per Second every day (19008 queries a day) for the past year, according to the stats provided by Flickr for the API key.

    Granted, it may be not a huge number compared to what other online services might be experiencing these days, but for me this is enough motivation to keep the little green frog working and running, thus worth updating it one more time. Also, I’d argue that these numbers for a niche app like this one (aimed at users of the Linux desktop that still use Flickr to upload pictures in 2017) do not even look too bad, although without more specific data backing this comment this is, of course, just my personal and highly-biased opinion.

    So, what’s new? Some small changes and fixes, along with other less visible modifications, but still relevant and necessary IMHO:

    • Fixed integration with GNOME Software (fixed a bug regarding appstream data).
    • Fixed errors loading images from certain cameras & phones, such as the OnePlus 5.
    • Cleaned the code by finally migrating to using g_auto, g_autoptr and g_autofree.
    • Migrated to the meson build system, and removed all the autotools files.
    • Big update to translations, now with more than 22 languages 90% – 100% translated.

    Also, this is the first release that happens after having a fully operational centralized place for Flatpak applications (aka Flathub), so I’ve updated the manifest and I’m happy to say that frogr 1.4 is already available for i386, arm, aarch64 and x86_64. You can install it either from GNOME Software (details on how to do it at https://flathub.org), or from the command line by just doing this:

    flatpak install --from https://flathub.org/repo/appstream/org.gnome.frogr.flatpakref

    Also worth mentioning that, starting with Frogr 1.4, I will no longer be updating my PPA at Launchpad. I did that in the past to make it possible for Ubuntu users to have access to the latest release ASAP, but now we have Flatpak that’s a much better way to install and run the latest stable release in any supported distro (not just Ubuntu). Thus, I’m dropping the extra work required to deal with the PPA and flat-out recommending users to use Flatpak or wait until their distro of choice packages the latest release.

    And I think this is everything. As usual, feel free to check the main website for extra information on how to get frogr and/or how to contribute to it. Feedback and/or help is more than welcome.

    Happy new year everyone!

    07 de December de 2017

    OSK update

    There’s been a rumor that I was working on improving gnome-shell on-screen keyboard, what’s been up here? Let me show you!

    The design has been based on the mockups at https://wiki.gnome.org/Design/OS/ScreenKeyboard, here’s how it looks in English (mind you, hasn’t gone through theming wizards):

    The keymaps get generated from CLDR (see here), which helped boost the number of supported scripts (c.f. caribou), some visual examples:

    As you can see there are still a few ugly ones; the layouts aren’t as uniform as one might expect, but these issues will be resolved over time.

    The additionally supported scripts don’t mean much without a way to send those fancy chars/strings to the client. Traditionally we were just able to send forged keyboard events, which means we were restricted to keycodes that had a representation in the current keymap. On X11 we are kind of stuck with that, but we can do better on Wayland: this work relies on a simplified version of the text input protocol that I’m giving a last proofread before proposing as v3 (the branches currently use a private copy). Using a specific protocol allows for sending UTF-8 strings independently of the keymap, which is also very convenient for text completion.

    But there are keymaps where CLDR doesn’t dare going, prominent examples are Chinese or Japanese. For those, I’m looking into properly leveraging IBus so pinyin-like input methods work by feeding the results into the suggestions box:

    Ni Hao!

    The suggestion box even kind of works with the typing booster ibus IM. But you have to explicitly activate it, there is room for improvement here in the future.

    And there is of course still bad stuff and todo items. Some languages like Korean have neither a layout nor input methods that accept Latin input, so they are badly handled (read: not at all). It would also be nice to support shape-based input.

    Other missing things from the mockups are the special numeric and emoji keymaps, there’s some unpushed work towards supporting those, but I had to draw the line somewhere!

    The work has been pushed in mutter, gtk+ and gnome-shell branches, which I hope will get timely polished and merged this cycle 🙂

    09 de September de 2017

    WebDriver support in WebKitGTK+ 2.18

    WebDriver is an automation API to control a web browser. It allows creating automated tests for web applications independently of the browser and platform. WebKitGTK+ 2.18, which will be released next week, includes an initial implementation of the WebDriver specification.

    WebDriver in WebKitGTK+

    There’s a new process (WebKitWebDriver) that works as the server, processing the clients’ requests to spawn and control the web browser. The WebKitGTK+ driver is not tied to any specific browser: it can be used with any WebKitGTK+-based browser, but it uses MiniBrowser as the default. The driver uses the same remote control protocol used by the remote inspector to communicate with and control the web browser instance. The implementation is not complete yet, but it’s enough for what many users need.

    The clients

    The web application tests are the clients of the WebDriver server. The Selenium project provides APIs for different languages (Java, Python, Ruby, etc.) to write the tests. Python is the only language supported by WebKitGTK+ for now. It’s not yet upstream, but we hope it will be integrated soon. In the meantime you can use our fork in github. Let’s see an example to understand how it works and what we can do.

    from selenium import webdriver
    
    # Create a WebKitGTK driver instance. It spawns WebKitWebDriver 
    # process automatically that will launch MiniBrowser.
    wkgtk = webdriver.WebKitGTK()
    
    # Let's load the WebKitGTK+ website.
    wkgtk.get("https://www.webkitgtk.org")
    
    # Find the GNOME link.
    gnome = wkgtk.find_element_by_partial_link_text("GNOME")
    
    # Click on the link. 
    gnome.click()
    
    # Find the search form. 
    search = wkgtk.find_element_by_id("searchform")
    
    # Find the first input element in the search form.
    text_field = search.find_element_by_tag_name("input")
    
    # Type epiphany in the search field and submit.
    text_field.send_keys("epiphany")
    text_field.submit()
    
    # Let's count the links in the contents div to check we got results.
    contents = wkgtk.find_element_by_class_name("content")
    links = contents.find_elements_by_tag_name("a")
    assert len(links) > 0
    
    # Quit the driver. The session is closed so MiniBrowser 
    # will be closed and then WebKitWebDriver process finishes.
    wkgtk.quit()
    

    Note that this is just an example to show how to write a test and what kind of things you can do, there are better ways to achieve the same results, and it depends on the current source of public websites, so it might not work in the future.

    Web browsers / applications

    As I said before, WebKitWebDriver process supports any WebKitGTK+ based browser, but that doesn’t mean all browsers can automatically be controlled by automation (that would be scary). WebKitGTK+ 2.18 also provides new API for applications to support automation.

    • First of all, the application has to explicitly enable automation using webkit_web_context_set_automation_allowed(). It’s important to know that the WebKitGTK+ API doesn’t allow enabling automation in several WebKitWebContexts at the same time. The driver will spawn the application when a new session is requested, so the application should enable automation at startup. It’s recommended that applications add a new command line option to enable automation, and only enable it when provided.
    • After launching the application the driver will request the browser to create a new automation session. The signal “automation-started” will be emitted in the context to notify the application that a new session has been created. If automation is not allowed in the context, the session won’t be created and the signal won’t be emitted either.
    • A WebKitAutomationSession object is passed as a parameter to the “automation-started” signal. This can be used to provide information about the application (name and version) to the driver, which will match it against what the client requires, accepting or rejecting the session request accordingly.
    • The WebKitAutomationSession will emit the signal “create-web-view” every time the driver needs to create a new web view. The application can then create a new window or tab containing the new web view that should be returned by the signal. This signal will always be emitted even if the browser already has an initial web view open; in that case it’s recommended to return the existing empty web view.
    • Web views are also automation aware: similarly to ephemeral web views, web views that allow automation should be created with the constructor property “is-controlled-by-automation” enabled.
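
    Putting those pieces together, here is a minimal sketch in C of how a browser could opt in. The function and application names are made up for illustration, and window handling is omitted; check the WebKitGTK+ 2.18 documentation for the exact semantics of each call:

    #include <webkit2/webkit2.h>

    static WebKitWebView *
    on_create_web_view (WebKitAutomationSession *session, gpointer user_data)
    {
        /* Create a web view marked as controlled by automation; a real browser
         * would also add it to a new window or tab and show it. */
        return WEBKIT_WEB_VIEW (g_object_new (WEBKIT_TYPE_WEB_VIEW,
                                              "is-controlled-by-automation", TRUE,
                                              NULL));
    }

    static void
    on_automation_started (WebKitWebContext *context, WebKitAutomationSession *session, gpointer user_data)
    {
        /* Tell the driver who we are so it can match the client's requirements. */
        WebKitApplicationInfo *info = webkit_application_info_new ();
        webkit_application_info_set_name (info, "MyBrowser");
        webkit_application_info_set_version (info, 1, 0, 0);
        webkit_automation_session_set_application_info (session, info);
        webkit_application_info_unref (info);

        g_signal_connect (session, "create-web-view",
                          G_CALLBACK (on_create_web_view), NULL);
    }

    static void
    enable_automation_if_requested (gboolean automation_mode)
    {
        /* Only when a (hypothetical) --automation-mode option was given. */
        if (!automation_mode)
            return;

        WebKitWebContext *context = webkit_web_context_get_default ();
        webkit_web_context_set_automation_allowed (context, TRUE);
        g_signal_connect (context, "automation-started",
                          G_CALLBACK (on_automation_started), NULL);
    }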

    This is the new API that applications need to implement to support WebDriver. It’s designed to be as safe as possible, but there are many things that can’t be controlled by WebKitGTK+, so we have several recommendations for applications that want to support automation:

    • Add a way to enable automation in your application at startup, like a command line option, that is disabled by default. Never allow automation in a normal application instance.
    • Enabling automation is not the only thing the application should do, so add an automation mode to your application.
    • Add visual feedback when in automation mode, like changing the theme or the window title, or whatever makes it clear that a window or instance of the application is controllable by automation.
    • Add a message to explain that the window is being controlled by automation and the user is not expected to use it.
    • Use ephemeral web views in automation mode.
    • Use a temporary user profile in automation mode; do not allow automation to change the history, bookmarks, etc. of an existing user.
    • Do not load any homepage in automation mode, just keep an empty web view (about:blank) that can be used when a new web view is requested by automation.

    The WebKitGTK client driver

    Applications need to implement the new automation API to support WebDriver, but the WebKitWebDriver process doesn’t know how to launch the browsers. That information should be provided by the client using the WebKitGTKOptions object. The driver constructor can receive an instance of a WebKitGTKOptions object, with the browser information and other options. Let’s see how it works with an example to launch Epiphany:

    from selenium import webdriver
    from selenium.webdriver import WebKitGTKOptions
    
    options = WebKitGTKOptions()
    options.browser_executable_path = "/usr/bin/epiphany"
    options.add_browser_argument("--automation-mode")
    epiphany = webdriver.WebKitGTK(browser_options=options)
    

    Again, this is just an example; Epiphany doesn’t even support WebDriver yet. Browsers or applications could create their own drivers on top of the WebKitGTK one to make it more convenient to use.

    from selenium import webdriver
    epiphany = webdriver.Epiphany()
    

    Plans

    During the next release cycle, we plan to do the following tasks:

    • Complete the implementation: add support for all commands in the spec and complete the ones that are partially supported now.
    • Add support for running the WPT WebDriver tests in the WebKit bots.
    • Add a WebKitGTK driver implementation for other languages in Selenium.
    • Add support for automation in Epiphany.
    • Add WebDriver support to WPE/dyz.

    22 de August de 2017

    Tracker requires SQLite >= 3.20 to be compiled with --enable-fts5

    Tracker is one of those pieces of software that get no special praise when things work, but you wake up to personal insults on bugzilla when they don’t. Today is one of those days.

    Several distros have been eager to push SQLite 3.20.0 still hot from the oven to their users, apparently ignoring the API and ABI incompatibilities that are described in the changelog. These do hit Tracker, and are only made visible at runtime.

    Furthermore, there is additional undocumented ABI breakage that makes FTS5 modules generated from pre/post 3.20.0 SQLite code backward and forward incompatible with the other versions. Tracker used to ship a copy of the FTS5 module, but this situation is not tenable anymore.

    The solution then? Making it mandatory that SQLite >= 3.20.0 must have FTS5 builtin. The just released Tracker 1.12.3 and 1.99.3 will error hard if that is not the case.

    I’ve just sent this mail to distributor-list:

    Hi all,
    
    Sqlite 3.20.0 broke API/ABI for Tracker purposes. The change described in point 3 at http://sqlite.org/releaselog/3_20_0.html is not only backwards incompatible, but also brings backwards and forwards incompatibilities with standalone FTS5 modules like Tracker ships [1], all of those are only visible at runtime [2].
    
    FTS5 modules generated from SQLite < 3.20.0 won't work with >= 3.20.0, and the other way around. Since it's not tenable to ship multiple FTS5 module copies for pre/post 3.20.0, Tracker shall now make it a hard requirement that SQLite is compiled with builtin FTS5 (--enable-fts5) if SQLite >= 3.20.0 is found. The current Tracker FTS5 module is kept for older SQLite versions.
    
    This change applies to Tracker >=1.12.3 and >=1.99.3. I don't know if any distro pushed SQLite 3.20.0 in combination with a Tracker that is older than that, but it will be just as broken, those would require additional patches from the tracker-1.12 branch besides the described change.
    
    Please handle this promptly, wherever there's sqlite 3.20.0 without builtin FTS5 and/or tracker <= 1.12.2/1.99.2, there's unhappy Tracker users.
    
    Cheers,
      Carlos
    
    [1] Generated as described in
    http://sqlite.org/fts5.html#building_a_loadable_extension
    [2] https://bugzilla.gnome.org/show_bug.cgi?id=785883
    

    So, if you see errors about “TrackerTokenizer” something, please contact your distro packagers. I’ll close further incoming bugs as NOTGNOME.

    04 de August de 2017

    Back from GUADEC

    After spending a few days in Manchester with other fellow GNOME hackers and colleagues from Endless, I’m finally back at my place in the sunny land of Surrey (England) and I thought it would be nice to write some sort of recap, so here it is:

    The Conference

    Getting ready for GUADEC

    I arrived in Manchester on Thursday the 27th just in time to go to the pre-registration event, where I met the rest of the gang and had some dinner, and that was already a great start. Let’s forget about the fact that I lost my badge even before leaving the place, which has to be some type of record (losing the badge before the conference starts, really?), but all in all it was great to meet old friends, as well as some new faces, that evening already.

    Then the 3 core days of GUADEC started. My first impression was that everything (including the accommodation at the university, which was awesome) was very well organized in general, and the venue made it a perfect place for this type of event, so I was already impressed even before things started.

    I attended many talks and all of them were great, but if I had to pick my 5 favourites, I think they would be the following, in no particular order:

    • The GNOME Way, by Allan: A very insightful and inspiring talk, it made me think about why we do the things we do, and why it matters. It also kicked off an interesting pub conversation with Allan later on, and I learned a new word in English (“principled”), so believe me, it was great.
    • Keynote: The Battle Over Our Technology, by Karen: I have no words to express how much I enjoyed this talk. Karen was very powerful on stage and the way she shared her experiences and connected them to why Free Software is important did leave a mark.
    • Mutter/gnome-shell state of the union, by Florian and Carlos: As a person who is getting increasingly involved with Endless’s fork of GNOME Shell, I found this one particularly interesting. Also, I found it rather funny at points, especially during “the NVIDIA slide”.
    • Continuous: Past, Present, and Future, by Emmanuele: Sometimes I talk to friends and it strikes me how quickly they dismiss things like CI/CD as “boring” or “not interesting”, which I couldn’t disagree more with. This is very important work and Emmanuele is kicking ass as the build sheriff, so his talk was very interesting to me too. Also, he’s got a nice cat.
    • The History of GNOME, by Jonathan: Truth be told, Jonathan already did a rather similar talk internally at Endless a while ago, so it was not entirely new to me, but I enjoyed it a lot too because it brought back so many memories: starting with when I started with Linux (RedHat 5.2 + GNOME pre-release!), when I used GNOME 1.x at university and then moved to GNOME 2.x later on… not to mention the funny anecdotes he mentioned (I never imagined the phone ringing while sleeping could be a good thing). Perfectly timed for the 20th anniversary of GNOME indeed!

    As I said, I attended other talks and they were all great too, so I’d encourage you to check the schedule and watch the recordings once they are available online; you won’t regret it.

    Closing ceremony

    And the next GUADEC will be in… Almería!

    One thing that surprised me this time was that I didn’t do as much hacking during the conference as on other occasions. Rather than seeing it as a bad thing, I believe that’s a clear indicator of how interesting and engaging the talks were this year, which made for a perfect return after missing 3 editions (yes, my last GUADEC was in 2013).

    All in all it was a wonderful experience, and I can’t thank and congratulate the local team and the volunteers who ran the conference this year well enough, so here is a picture I took where you can see all the people standing up and clapping during the closing ceremony.

    Many thanks and congratulations for all the work done. Seriously.

    The Unconference

    After 3 days of conference, the second part started: “2 days and a bit” (I was leaving on Wednesday morning) of meeting people and hacking in a different venue, where we gathered to work on different topics, plus the occasional high-bandwidth meeting in person.

    GUADEC unconference

    As you might expect, my main interest this time was around GNOME Shell, which is my main duty at Endless right now. This means that, besides trying to be present in the relevant BoFs, I spent quite some time participating in discussions that gathered both upstream contributors and people from different companies (e.g. Endless, Red Hat, Canonical).

    This was extremely helpful and useful for me since, now that we have rebased our fork of GNOME Shell onto 3.22, we’re in a much better position to converge and contribute back to upstream in a more reasonable fashion, as well as to collaborate on implementing new features that we already have in Endless but that haven’t made it to upstream yet.

    And talking about those features, I’d like to highlight two things:

    First, the discussion we held with both developers and designers to talk about the new improvements that are being considered for both the window picker and the apps view, where one of the ideas is to improve the apps view by (maybe) adding a new grid of favourite applications that the users could customize, change the order… and so forth.

    According to the designers this proposal was partially inspired by what we have in Endless, so you can imagine I would be quite happy to see such a plan move forward, as we could help with the coding side of things upstream while reducing our diff for future rebases. Thing is, this is just a proposal for now so nothing is set in stone yet, but I will definitely be interested in following and participating in the relevant discussions regarding this.

    Second, as my colleague Georges already vaguely mentioned in his blog post, we had an improvised meeting on Wednesday with one of the designers from Red Hat (Jakub Steiner), where we discussed a very particular feature upstream has wanted to have for a while and which Endless implemented downstream: management of folders using DnD, right from the apps view.

    This is something that Endless has had in its desktop since the beginning of time, but the implementation relied on a downstream-specific version of folders that Endless OS implemented even before folders were available in the upstream GNOME Shell, so contributing that back would have been… “interesting”. But fortunately, we have now dropped that custom implementation of folders and embraced the upstream solution during the last rebase to 3.22, and we’re in a much better position now to contribute our solution upstream. Once this lands, you should be able to create, modify, remove and use folders without having to open GNOME Software at all, just by dragging and dropping apps on top of other apps and folders, pretty much in a similar fashion to how you would do it in a mobile OS these days.

    We’re still in an early stage for this, though. Our current solution in Endless is based on some assumptions and tools that will simply not be the case upstream, so we will have to work with both the designers and the upstream maintainers to make this happen over the next months. Thus, don’t expect anything to land for the next stable release yet, but simply know we’ll be working on it and that it should hopefully arrive not too far in the future.

    The Rest

    This GUADEC has been a blast for me, and probably the best and my favourite edition ever among all those I’ve attended since 2008. Reasons for such a strong statement are diverse, but I think I can mention a few that are clear to me:

    From a personal point of view, I never felt so engaged and part of the community as this time. I don’t know if that has something to do with my recent duties in Endless (e.g. flatpak, GNOME Shell) or with something less “tangible” but that’s the truth. Can’t state it well enough.

    From the perspective of Endless, the fact that 17 of us were there is something to be very excited and happy about, especially considering that I work remotely and only see 4 of my colleagues from the London area on a regular basis (i.e. one day a week). Being able to meet people I don’t regularly see as well as some new faces in person is always great, but having them all together “under the same ceilings” for 6 days was simply outstanding.

    GNOME 20th anniversary dinner

    Also, as it happened, this year was the celebration of the 20th anniversary of the GNOME project, so the whole thing was quite emotional too. Not to mention that Federico’s birthday happened during GUADEC, which was a more than nice… coincidence? :-) Ah! And we also had an incredible dinner on Saturday to celebrate that, so it certainly couldn’t have been a better opportunity for me to attend this conference!

    Last, a nearly impossible thing happened: despite the demanding schedule that an event like this imposes (and I’m including our daily visit to the pubs here too), I managed to go running between 5 km and 10 km every single day, which I believe is the first time that has happened in my life. I have definitely taken my running gear with me to other conferences, but this time was the only one I took it that seriously, and also the first time that I joined other fellow GNOME runners in the process, which was quite fun as well.

    Final words

    I couldn’t finish this extremely long post without a brief note to acknowledge and thank all the many people who made this possible this year: the GNOME Foundation and the amazing group of volunteers who helped organize it, the local team who did an outstanding job at all levels (venue, accommodation, events…), my employer Endless for sponsoring my attendance and, of course, all the people who attended the event and made it such a special GUADEC this year.

    Thank you all, and see you next year in Almería!

    Credit to Georges Stavracas

    04 de July de 2017

    Endless OS 3.2 released!

    We just released Endless OS 3.2 to the world after a lot of really hard work from everyone here at Endless, including many important changes and fixes that spread pretty much across the whole OS: from the guts and less visible parts of the core system (e.g. a newer Linux kernel, OSTree and Flatpak improvements, updated libraries…) to other more visible parts including a whole rebase of the GNOME components and applications (e.g. mutter, gnome-settings-daemon, nautilus…), newer and improved “Endless apps” and a completely revamped desktop environment.

    By the way, before I dive deeper into the rest of this post, I’d like to remind you that Endless OS is an operating system that you can download for free from our website, so please don’t hesitate to check it out if you want to try it by yourself. But now, even though I’d love to talk in detail about ALL the changes in this release, I’d like to talk specifically about what has kept me busy most of the time since around March: the full revamp of our desktop environment, that is, our particular version of GNOME Shell.

    Endless OS 3.2 as it looks in my laptop right now

    If you’re already familiar with what Endless OS is and/or with the GNOME project, you might already know that Endless’s desktop is a forked and heavily modified version of GNOME Shell, but what you might not know is that it was specifically based on GNOME Shell 3.8.

    Yes, you read that right, no kidding: a now 4-year-old version of GNOME Shell was alive and kicking underneath the thousands of downstream changes that we added on top of it during all that time to implement the desired user experience for our target users, as we iterated based on tons of user testing sessions, research, design visions… that this company has been working on right since its inception. That includes porting very visible things such as the “Endless button”, the user menu, the apps grid right on top of the desktop, the ability to drag’n’drop icons around to re-organize that grid and easily manage folders (by just dragging apps into/out of folders), the integrated desktop search (+ additional search providers), the window picker mode… and many other things that are not visible at all, but that are required to deliver a tight and consistent experience to our users.

    Endless button showcasing the new “show desktop” functionality

    Aggregated system indicators and the user menu

    Of course, this situation was not optimal and finally we decided we had found the right moment to tackle this situation in line with the 3.2 release, so I was tasked with leading the mission of “rebasing” our downstream changes on top of a newer shell (more specifically on top of GNOME Shell 3.22), which looked to me like a “hell of a task” when I started, but still I didn’t really hesitate much and gladly picked it up right away because I really did want to make our desktop experience even better, and this looked to me like a pretty good opportunity to do so.

    By the way, note that I say “rebasing” between quotes, and the reason is that the usual approach of taking your downstream patches on top of a certain version of an Open Source project and applying them on top of whatever newer version you want to update to didn’t really work here: the vast amount of changes combined with the fact that the code base has changed quite a bit between 3.8 and 3.22 made that strategy fairly complicated, so in the end we had to opt for a combination of rebasing some patches (when they were clean enough and still made sense) and re-implementing the desired functionality on top of the newer base.

    Integrated desktop search

    The integrated desktop search in action

    New implementation for folders in Endless OS (based on upstream’s)

    As you can imagine, and especially considering my fairly limited previous experience with things like mutter, clutter and the shell’s code, this proved to be a pretty difficult thing for me to take on, if I’m truly honest. However, maybe it’s precisely because of all those things that, now that it’s released, I look at the result of all these months of hard work and I can’t help but feel very proud of what we achieved in this pretty tight time frame: we have a refreshed Endless OS desktop now with new functionality, better animations, better panels, better notifications, better folders (we ditched our own in favour of upstream’s), better infrastructure… better everything!

    Sure, it’s not perfect yet (there’s no such thing as “finished software”, right?) and we will keep working hard on the next releases to fix known issues and make it even better, but what we have released today is IMHO a pretty solid 3.2 release that I feel very proud of, and one that is out there now already for everyone to see, use and enjoy, and that is quite an achievement.

    Removing an app by dragging and dropping it into the trash bin

    Now, you might have noticed I used “we” most of the time in this post when referring to the hard work that we did, and that’s because this was not something I did myself alone, not at all. While it’s still true that I started working on this mostly on my own and that I probably took on most of the biggest tasks myself, the truth is that several other people jumped in to help with this monumental task, tackling a fair amount of important tasks in parallel, and I’m pretty sure we couldn’t have released this by now if not for the team effort we managed to pull off here.

    I’m a bit afraid of forgetting to mention some people, but I’ll try anyway: many thanks to Cosimo Cecchi, Joaquim Rocha, Roddy Shuler, Georges Stavracas, Sam Spilsbury, Will Thomson, Simon Schampijer, Michael Catanzaro and of course the entire design team, who all joined me in this massive quest by taking some time alongside their other responsibilities to help by tackling several tasks each, resulting in the shell being released on time.

    The window picker as activated from the hot corner (bottom – right)

    Last, before I finish this post, I’d just like to pre-answer a couple of questions that I guess some of you might have already:

    Will you be proposing some of these changes upstream?

    Our intention is to reduce the diff with upstream as much as possible, which is the reason we have left many things from upstream untouched in Endless OS 3.2 (e.g. the date/menu panel) and the reason why we already did some fairly big changes for 3.2 to get closer to upstream in other places where we previously had our very own thing (e.g. folders), so rest assured we will upstream everything we can, as far as it’s possible and makes sense for upstream.

    Actually, we have already pushed many patches to the shell and related projects since Endless moved to GNOME Shell a few years ago, and I don’t see any reason why that would change.

    When will Endless OS desktop be rebased again on top of a newer GNOME Shell?

    If there is anything we learned from this “rebasing” experience, it is that we don’t want to go through it ever again, seriously :-). It made sense to be based on an old shell for some time while we were prototyping and developing our desktop based on our research, user testing sessions and so on, but we now have a fairly mature system and the current plan is to move on from this situation, where we had changes on top of a 4-year-old codebase, to a point where we’ll keep closer to upstream, with more frequent rebases from now on.

    Thus, the short answer to that question is that we plan to rebase the shell more frequently after this release, ideally two times a year so that we are never too far away from the latest GNOME Shell codebase.


    And I think that’s all. I’ve already written too much, so if you’ll excuse me I’ll get back to my Emacs (yes, I’m still using Emacs!) and let you enjoy this video of a recent development snapshot of Endless OS 3.2, as created by my colleague Michael Hall a few days ago:


    (Feel free to visit our YouTube channel to check out for more videos like this one)

    Also, a quick shameless plug just to remind you that we have an Endless Community website which you can join and use to provide feedback, ask questions or simply keep informed about Endless. And if real time communication is your thing, we’re also on IRC (#endless on Freenode) and Slack, so I very much encourage you to join us via any of these channels as well if you want.

    Ah! And before I forget, just a quick note to mention that this year I’m going to GUADEC again after a big break (my last one was in Brno, in 2013) thanks to my company, which is sponsoring my attendance in several ways, so feel free to say “hi!” if you want to talk to me about Endless, the shell, life or anything else.

    03 de May de 2017

    WebKitGTK+ remote debugging in 2.18

    WebKitGTK+ has supported remote debugging for a long time. The current implementation uses WebSockets for the communication between the local browser (the debugger) and the remote browser (the debug target or debuggable). This implementation was very simple and, in theory, you could use any web browser as the debugger because all inspector code was served over the WebSockets. I said in theory because in practice this was not always so easy, since the inspector code uses newer JavaScript features that are not implemented in other browsers yet. The other major issue with this approach was that the communication between debugger and target was not bi-directional, so the target browser couldn’t notify the debugger about changes (like a new tab being opened, a navigation, or the target being about to close).

    Apple abandoned the WebSockets approach a long time ago and implemented its own remote inspector, using XPC for the communication between debugger and target. They also moved the remote inspector handling to JavaScriptCore making it available to debug JavaScript applications without a WebView too. In addition, the remote inspector is also used by Apple to implement WebDriver. We think that this approach has a lot more advantages than disadvantages compared to the WebSockets solution, so we have been working on making it possible to use this new remote inspector in the GTK+ port too. After some refactorings to the code to separate the cross-platform implementation from the Apple one, we could add our implementation on top of that. This implementation is already available in WebKitGTK+ 2.17.1, the first unstable release of this cycle.

    From the user’s point of view there aren’t many differences. With the WebSockets approach we launched the target browser this way:

    $ WEBKIT_INSPECTOR_SERVER=127.0.0.1:1234 browser
    

    This hasn’t changed with the new remote inspector. To start debugging we opened any browser and loaded

    http://127.0.0.1:1234

    With the new remote inspector we have to use any WebKitGTK+ based browser and load

    inspector://127.0.0.1:1234

    As you have already noticed, it’s no longer possible to use any web browser: you need to use a recent enough WebKitGTK+ based browser as the debugger. This is because of the way the new remote inspector works. It requires a frontend implementation that knows how to communicate with the targets. In the case of Apple that frontend implementation is Safari itself, which has a menu with the list of remote debuggable targets. In WebKitGTK+ we didn’t want to force using a particular web browser as debugger, so the frontend is implemented as a built-in custom protocol of WebKitGTK+. So, loading inspector:// URLs in any WebKitGTK+ WebView will show the remote inspector page with the list of debuggable targets.

    It looks quite similar to what we had, just a list of debuggable targets, but there are a few differences:

    • A new debugger window is opened when the inspect button is clicked instead of reusing the same web view. Clicking on inspect again just brings the window to the front.
    • The debugger window loads faster, because the inspector code is not served by HTTP, but locally loaded like the normal local inspector.
    • The target list page is updated automatically, without having to manually reload it when a target is added, removed or modified.
    • The debugger window is automatically closed when the target web view is closed or crashes.

    How does the new remote inspector work?

    The web browser checks for the presence of the WEBKIT_INSPECTOR_SERVER environment variable at startup, the same way it was done with the WebSockets. If present, the RemoteInspectorServer is started in the UI process, running a DBus service listening on the IP and port provided. The environment variable is propagated to the child web processes, which create a RemoteInspector object and connect to the RemoteInspectorServer. There’s one RemoteInspector per web process, and one debuggable target per WebView. Every RemoteInspector maintains a list of debuggable targets that is sent to the RemoteInspectorServer when a new target is added, removed or modified, or when explicitly requested by the RemoteInspectorServer.
    When the debugger browser loads an inspector:// URL, a RemoteInspectorClient is created. The RemoteInspectorClient connects to the RemoteInspectorServer using the IP and port of the inspector:// URL and asks for the list of targets that is used by the custom protocol handler to create the web page. The RemoteInspectorServer works as a router, forwarding messages between RemoteInspector and RemoteInspectorClient objects.

    20 de March de 2017

    WebKitGTK+ 2.16

    The Igalia WebKit team is happy to announce WebKitGTK+ 2.16. This new release drastically improves the memory consumption, adds new API as required by applications, includes new debugging tools, and of course fixes a lot of bugs.

    Memory consumption

    After WebKitGTK+ 2.14 was released, several Epiphany users started to complain about high memory usage of WebKitGTK+ when Epiphany had a lot of tabs open. As we already explained in a previous post, this was because of the switch to the threaded compositor, which made hardware acceleration always enabled. To fix this, we decided to make hardware acceleration optional again, enabled only when websites require it, but still using the threaded compositor. This is by far the major improvement in memory consumption, but not the only one. Even when in accelerated compositing mode, we managed to reduce the memory required by GL contexts when using GLX, by using OpenGL version 3.2 (core profile) if available. In Mesa-based drivers that means the software rasterizer fallback is never required, so the context doesn’t need to create the software rasterization part. And finally, an important bug was fixed in the JavaScript garbage collector timers that prevented garbage collection from happening in some cases.

    CSS Grid Layout

    Yes, the future is here, and now available by default in all WebKitGTK+ based browsers and web applications. This is the result of several years of great work by the Igalia web platform team in collaboration with Bloomberg. If you are interested, you have all the details in Manuel’s blog.

    New API

    The WebKitGTK+ API is quite complete now, but there are always new things required by our users.

    Hardware acceleration policy

    Hardware acceleration is now enabled on demand again: when a website requires accelerated compositing, hardware acceleration is enabled automatically. WebKitGTK+ has environment variables to change this behavior, WEBKIT_DISABLE_COMPOSITING_MODE to never enable hardware acceleration and WEBKIT_FORCE_COMPOSITING_MODE to always enable it. However, those variables were never meant to be used by applications, but only by developers to test the different code paths. The main problem with those variables is that they apply to all web views of the application. Not all WebKitGTK+ applications are web browsers, so it can happen that an application knows it will never need hardware acceleration for a particular web view, like for example the Evolution composer, while other applications, especially in the embedded world, always want hardware acceleration enabled and don’t want to waste time and resources with the switch between modes. For those cases a new WebKitSetting, hardware-acceleration-policy, has been added. We encourage everybody to use this setting instead of the environment variables when upgrading to WebKitGTK+ 2.16.
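
    For example, an embedded application that always wants acceleration could set the policy when creating its web views. This is just a small sketch using the setting as I understand it from the 2.16 API; ON_DEMAND is the default, and ALWAYS/NEVER force one mode or the other:

    #include <webkit2/webkit2.h>

    static GtkWidget *
    create_accelerated_web_view (void)
    {
        WebKitSettings *settings = webkit_settings_new ();
        /* Keep accelerated compositing always on for this view, avoiding
         * the switch between modes. */
        webkit_settings_set_hardware_acceleration_policy (settings,
            WEBKIT_HARDWARE_ACCELERATION_POLICY_ALWAYS);
        return webkit_web_view_new_with_settings (settings);
    }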

    Network proxy settings

    Since the switch to WebKit2, where the SoupSession is no longer available from the API, it hasn’t been possible to change the network proxy settings from the API. WebKitGTK+ has always used the default proxy resolver when creating the soup context, and that just works for most of our users. But there are some corner cases in which applications that don’t run under a GNOME environment want to provide their own proxy settings instead of using the proxy environment variables. For those cases WebKitGTK+ 2.16 includes a new UI process API to configure all proxy settings available in the GProxyResolver API.
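
    A small sketch of how that could look with the new WebKitNetworkProxySettings API (the proxy URI and ignore list are made-up values for illustration):

    #include <webkit2/webkit2.h>

    static void
    set_custom_proxy (WebKitWebContext *context)
    {
        /* Route all traffic through a custom proxy, bypassing localhost. */
        const gchar *ignore_hosts[] = { "localhost", "127.0.0.1", NULL };
        WebKitNetworkProxySettings *proxy =
            webkit_network_proxy_settings_new ("http://proxy.example.com:8080",
                                               ignore_hosts);
        webkit_web_context_set_network_proxy_settings (context,
            WEBKIT_NETWORK_PROXY_MODE_CUSTOM, proxy);
        webkit_network_proxy_settings_free (proxy);
    }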

    Private browsing

    WebKitGTK+ has always had a WebKitSetting to enable or disable the private browsing mode, but it has never worked really well. For that reason, applications like Epiphany have always implemented their own private browsing mode just by using a different profile directory in tmp to write all persistent data. This approach has several issues; for example, if the UI process crashes, the profile directory is leaked in tmp with all the personal data there. WebKitGTK+ 2.16 adds a new API that allows creating ephemeral web views which never write any persistent data to disk. It’s possible to create ephemeral web views individually, or to create ephemeral web contexts where all web views associated with them will be ephemeral automatically.
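
    The ephemeral web context case is basically a one-liner; a minimal sketch based on the 2.16 API:

    #include <webkit2/webkit2.h>

    static GtkWidget *
    create_private_web_view (void)
    {
        /* Every web view created for this context is ephemeral: nothing is
         * written to disk, so there is no profile directory to clean up. */
        WebKitWebContext *context = webkit_web_context_new_ephemeral ();
        return webkit_web_view_new_with_context (context);
    }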

    Website data

    WebKitWebsiteDataManager was added in 2.10 to configure the default paths on which website data should be stored for a web context. In WebKitGTK+ 2.16 the API has been expanded to include methods to retrieve and remove the website data stored on the client side. Not only persistent data like HTTP disk cache, cookies or databases, but also non-persistent data like the memory cache and session cookies. This API is already used by Epiphany to implement the new personal data dialog.
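
    As a rough idea of how an application could use it, this sketch clears the HTTP disk cache and cookies stored so far; the call is asynchronous, and the completion callback is omitted here for brevity:

    #include <webkit2/webkit2.h>

    static void
    clear_cache_and_cookies (WebKitWebContext *context)
    {
        WebKitWebsiteDataManager *manager =
            webkit_web_context_get_website_data_manager (context);
        /* A timespan of 0 means "everything, regardless of age". */
        webkit_website_data_manager_clear (manager,
            WEBKIT_WEBSITE_DATA_DISK_CACHE | WEBKIT_WEBSITE_DATA_COOKIES,
            0, NULL, NULL, NULL);
    }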

    Dynamically added forms

    Web browsers normally implement the remember passwords functionality by searching the DOM tree for authentication form fields when the document-loaded signal is emitted. However, some websites add the authentication form fields dynamically after the document has been loaded. In those cases web browsers couldn’t find any form fields to autocomplete. In WebKitGTK+ 2.16 the web extensions API includes a new signal to notify when new forms are added to the DOM. Applications can connect to it, instead of document-loaded, to start searching for authentication form fields.
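
    A minimal web extension sketch connecting to that signal; I’m assuming here that the signal is the form-controls-associated one on WebKitWebPage, so double check the 2.16 web extensions documentation before relying on the exact names:

    #include <webkit2/webkit-web-extension.h>

    static void
    on_form_controls_associated (WebKitWebPage *page, GPtrArray *elements, gpointer user_data)
    {
        /* 'elements' contains the form controls that were just associated with
         * the page, including ones added dynamically after load; this is where
         * a browser would look for user and password fields. */
        g_print ("%u form control(s) added to %s\n",
                 elements->len, webkit_web_page_get_uri (page));
    }

    static void
    on_page_created (WebKitWebExtension *extension, WebKitWebPage *page, gpointer user_data)
    {
        g_signal_connect (page, "form-controls-associated",
                          G_CALLBACK (on_form_controls_associated), NULL);
    }

    void
    webkit_web_extension_initialize (WebKitWebExtension *extension)
    {
        g_signal_connect (extension, "page-created",
                          G_CALLBACK (on_page_created), NULL);
    }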

    Custom print settings

    The GTK+ print dialog allows the user to add a new tab embedding a custom widget, so that applications can include their own print settings UI. Evolution used to do this, but the functionality was lost with the switch to WebKit2. In WebKitGTK+ 2.16 a similar API to the GTK+ one has been added to recover that functionality in Evolution.
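
    A rough sketch of how an application could hook into this, assuming the 2.16 API exposes it through a create-custom-widget signal on WebKitPrintOperation and a WebKitPrintCustomWidget object (please verify the exact names against the reference documentation; the widget contents are purely illustrative):

    #include <webkit2/webkit2.h>

    static WebKitPrintCustomWidget *
    on_create_custom_widget (WebKitPrintOperation *operation, gpointer user_data)
    {
        /* Embed an application-specific widget as an extra tab of the
         * GTK+ print dialog. */
        GtkWidget *check = gtk_check_button_new_with_label ("Print message headers");
        return webkit_print_custom_widget_new (check, "Message Options");
    }

    static void
    print_with_custom_settings (WebKitWebView *web_view)
    {
        WebKitPrintOperation *operation = webkit_print_operation_new (web_view);
        g_signal_connect (operation, "create-custom-widget",
                          G_CALLBACK (on_create_custom_widget), NULL);
        webkit_print_operation_run_dialog (operation, NULL);
    }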

    Notification improvements

    Applications can now set the initial notification permissions on the web context to avoid having to ask the user every time. It’s also possible to get the tag identifier of a WebKitNotification.
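
    A small sketch of setting the initial permissions; the origins are made-up values for illustration:

    #include <webkit2/webkit2.h>

    static void
    setup_notification_permissions (WebKitWebContext *context)
    {
        /* Allow notifications for one origin and deny them for another, so
         * the user is not prompted for either of them. */
        GList *allowed = g_list_prepend (NULL,
            webkit_security_origin_new_for_uri ("https://mail.example.com"));
        GList *disallowed = g_list_prepend (NULL,
            webkit_security_origin_new_for_uri ("https://ads.example.com"));

        webkit_web_context_initialize_notification_permissions (context,
                                                                allowed, disallowed);

        g_list_free_full (allowed, (GDestroyNotify) webkit_security_origin_unref);
        g_list_free_full (disallowed, (GDestroyNotify) webkit_security_origin_unref);
    }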

    Debugging tools

    Two new debugging tools are now available in WebKitGTK+ 2.16: the memory sampler and the resource usage overlay.

    Memory sampler

    This tool allows monitoring the memory consumption of the WebKit processes. It can be enabled by defining the environment variable WEBKIT_SAMPLE_MEMORY. When enabled, the UI process and all web processes will automatically take samples of memory usage every second. For every sample a detailed report of the memory used by the process is generated and written to a file in the temp directory.

    $ WEBKIT_SAMPLE_MEMORY=1 MiniBrowser 
    Started memory sampler for process MiniBrowser 32499; Sampler log file stored at: /tmp/MiniBrowser7ff2246e-406e-4798-bc83-6e525987aace
    Started memory sampler for process WebKitWebProces 32512; Sampler log file stored at: /tmp/WebKitWebProces93a10a0f-84bb-4e3c-b257-44528eb8f036
    

    The files contain a list of sample reports like this one:

    Timestamp                          1490004807
    Total Program Bytes                1960214528
    Resident Set Bytes                 84127744
    Resident Shared Bytes              68661248
    Text Bytes                         4096
    Library Bytes                      0
    Data + Stack Bytes                 87068672
    Dirty Bytes                        0
    Fast Malloc In Use                 86466560
    Fast Malloc Committed Memory       86466560
    JavaScript Heap In Use             0
    JavaScript Heap Committed Memory   49152
    JavaScript Stack Bytes             2472
    JavaScript JIT Bytes               8192
    Total Memory In Use                86477224
    Total Committed Memory             86526376
    System Total Bytes                 16729788416
    Available Bytes                    5788946432
    Shared Bytes                       1037447168
    Buffer Bytes                       844214272
    Total Swap Bytes                   1996484608
    Available Swap Bytes               1991532544
    

    Resource usage overlay

    The resource usage overlay is only available on Linux systems when WebKitGTK+ is built with ENABLE_DEVELOPER_MODE. It allows showing an overlay with information about resources currently in use by the web process, like CPU usage, total memory consumption, JavaScript memory and JavaScript garbage collector timers information. The overlay can be shown/hidden by pressing CTRL+Shift+G.

    We plan to add more information to the overlay in the future like memory cache status.

    10 de February de 2017

    Accelerated compositing in WebKitGTK+ 2.14.4

    The WebKitGTK+ 2.14 release was very exciting for us: it finally introduced the threaded compositor to drastically improve the accelerated compositing performance. However, the threaded compositor imposed accelerated compositing to be always enabled, even for non-accelerated contents. Unfortunately, this caused different kinds of problems for several people, and proved that we are not ready to render everything with OpenGL yet. The most relevant problems reported were:

    • Memory usage increase: OpenGL contexts use a lot of memory, and we have the compositor in the web process, so we have at least one OpenGL context in every web process. The threaded compositor uses the coordinated graphics model, which also requires more memory than the simple mode we previously used. People who use a lot of tabs in Epiphany quickly noticed that the amount of memory required was a lot more.
    • Startup and resize slowness: The threaded compositor makes everything smooth and performs quite well, except at startup or when the view is resized. At startup we need to create the OpenGL context, which is quite slow by itself, but we also need to create the compositing thread, so things are expected to be slower. Resizing the viewport is the only threaded compositor task that needs to be done synchronously, to ensure that everything is in sync: the web view in the UI process, the OpenGL viewport and the backing store surface. This means we need to wait until the threaded compositor has updated to the new size.
    • Rendering issues: some people reported rendering artifacts or even nothing rendered at all. In most of the cases these were not issues in WebKit itself, but in the graphics driver or library. It’s quite difficult for a general purpose web engine to support and deal with all possible GPUs, drivers and libraries. Chromium has a huge list of hardware exceptions to disable some OpenGL extensions or even hardware acceleration entirely.

    Because of these issues people started to use different workarounds. Some people, and even applications like evolution, started to use WEBKIT_DISABLE_COMPOSITING_MODE environment variable, that was never meant for users, but for developers. Other people just started to build their own WebKitGTK+ with the threaded compositor disabled. We didn’t remove the build option because we anticipated some people using old hardware might have problems. However, it’s a code path that is not tested at all and will be removed for sure for 2.18.

    All these issues are not really specific to the threaded compositor, but to the fact that it forced the accelerated compositing mode to be always enabled, using OpenGL unconditionally. It looked like a good idea: entering/leaving accelerated compositing mode was a source of bugs in the past, and all other WebKit ports have accelerated compositing mode forced too. Other ports use UI side compositing, though, or target very specific hardware, so the memory problems and the driver issues are not a problem for them. The imposition to force the accelerated compositing mode came from the switch to using coordinated graphics, because, as I said, other ports using coordinated graphics have accelerated compositing mode always enabled, so they didn’t care about the case of it being disabled.

    There are a lot of long-term things we can do to improve all these issues, like moving the compositor to the UI (or a dedicated GPU) process to have a single GL context, implementing tab suspension, etc., but we really wanted to fix or at least improve the situation for 2.14 users. Switching back to using accelerated compositing mode on demand is something that we could do in the stable branch and it would improve things, at least to a level comparable to what we had before 2.14, but with the threaded compositor. Making it happen was a matter of fixing a lot of bugs, and the result is this 2.14.4 release. Of course, this will be the default in 2.16 too, where we have also added API to set a hardware acceleration policy.

    We recommend all 2.14 users to upgrade to 2.14.4 and stop using the WEBKIT_DISABLE_COMPOSITING_MODE environment variable or building with the threaded compositor disabled. The new API in 2.16 will allow to set a policy for every web view, so if you still need to disable or force hardware acceleration, please use the API instead of WEBKIT_DISABLE_COMPOSITING_MODE and WEBKIT_FORCE_COMPOSITING_MODE.

    We really hope this new release and the upcoming 2.16 will work much better for everybody.

    21 de December de 2016

    Let’s put an end to interstitials

    Interstitials!

    What a word. “Interstitials”. “Intersticiales” in Spanish. Interstitial ads; which, according to the RAE, would be those that occupy the interstices, but which are, in reality, ads that take over the whole screen and are especially annoying because:

    • They prevent you from viewing the page you want to see. In short, they are annoying, and much more so than the classic pop-ups.
    • They encourage fraudulent clicks: most of the times we tap on the ad, it is not to reach the advertised product but to close it. The button to dismiss it usually requires special skill (or aim) from the user on their multi-touch screen… unless your fingers are the size of a pinhead.

    Interstitial ad, image by Google

    Well, a little over 4 months ago now, Google announced that it was going to penalize the use of interstitial ads in mobile searches starting in January 2017, which is already just around the corner.

    It is worth remembering because, as of today, most news sites on the Internet keep including this kind of ad as if nothing had happened, making reading very uncomfortable, especially on mobile, where getting rid of the ad is complicated; that is why, many times, yours truly decides to abandon the chosen news outlet and move to an alternative one that informs me more and annoys me less.

    I still hold out hope that this search engine penalty will actually have its intended effect. In the meantime, we will keep making extensive and abusive use of ad blockers, despite the warnings from certain outlets that this hurts them.

    15 de December de 2016

    Thu 2016/Dec/15

    Igalia is hiring. We're currently interested in Multimedia and Chromium developers. Check the announcements for details on the positions and our company.

    10 de November de 2016

    Web Engines Hackfest 2016

    From September 26th to 28th we celebrated the 2016 edition of the Web Engines Hackfest at the Igalia HQ. This year we broke all records and got participants from the three main companies behind the three biggest open source web engines, namely Mozilla, Google and Apple. Of course, it was not only them: we had some other companies, and ourselves. I was an active part of the organization and I think we not only didn’t get any complaints, but people were comfortable and happy throughout.

    We had several talks (I included the slides and YouTube links):

    We had lots and lots of interesting hacking and we also had several breakout sessions:

    • WebKitGTK+ / Epiphany
    • Servo
    • WPE / WebKit for Wayland
    • Layout Models (Grid, Flexbox)
    • WebRTC
    • JavaScript Engines
    • MathML
    • Graphics in WebKit

    What I did during the hackfest was work with Enrique and Žan to advance the review of our downstream GStreamer-based implementation of Media Source Extensions (MSE) in order to land it as soon as possible, and I can proudly say that we already did (we didn’t finish at the hackfest but managed to do it afterwards). We broke the bots and pissed off Michael and Carlos, but we managed to deactivate it by default and continue working on it upstream.

    So, summing up, from my point of view (and not only because I was part of the organization at Igalia, but also based on other people’s opinions), I think the hackfest was a success, and I think we will continue as we were, or maybe grow a bit (no spoilers!).

    Finally I would like to thank our gold sponsors Collabora and Igalia and our silver sponsor Mozilla.

    03 de October de 2016

    Modernizing blam's autotools (or shaving the yak to move out from GoogleReader...)

    Before focusing my spare time completely on the GSoC* (as I have mentoring responsibilities this year \o/ ), I wanted to solve a problem that cannot wait until after July...

    Yes, I've been a victim of Google's cuts too... And I was wondering, where should I move? Feedly? ThingyBob? Well, I shouldn't make the same mistake twice, right?

    Actually, some time ago I was using a desktop app to avoid relying on software that I cannot control (yes, vendor lock-in, the most important thing that open source tries to solve, right?): Thunderbird. But somehow the convenience of a web app (that I can access from any computer) and the hassle of using my mail client for RSS reading made me move to the web.

    I should be able to find a replacement that no company or individual can "take down", and which feels less clunky than Thunderbird for reading RSS. So, enter blam (in the future I'll figure out how to sync its state between computers, maybe using SparkleShare?, to achieve that same convenience that a web-app provides), that Gnome app that has strangely managed to not catch my eye until now...

    Well, maybe because if I install it from Debian sid and try to import my very first RSS feed from my GoogleReader list it doesn't work? Well, apparently it is a bug that is already fixed upstream, thanks to Carlos, who has modernized the way the program deals with XML and serialization.

    Then I went ahead and tried to compile master myself... and guess what, the autogen.sh execution fails. Here the yak shaving begins; this is how I feel when trying to fix the autotools stuff:


    Fortunately, after some tinkering (and some copy&paste from banshee's build scripts), I managed to fix the problem, and also modernized some things a bit (like using the brand new ".ac" extension instead of ".in" for the configure script, or properly using the AC_INIT and AM_INIT_AUTOMAKE macros, ...).

    Anyway, the real thing to highlight here is that while I was fixing this stuff and pushing to the repository...


    ... I saw some really good stuff committed by Carlos: using the new .NET 4.5 C# async patterns to get rid of those ugly callbacks! Kudos to him.

    And if you're willing to help more with our autotools housekeeping, please do; I still feel this autogen.sh is way too long and needs some ironing.

    * And if you're wondering what's up with GSoC (aka Google Summer of Code):

    • I had Nicholas Little lined up to work on Rygel+Banshee integration, but sadly he couldn't apply due to work commitments (hopefully he will still work with me on it in his spare time).
    • I had Rashid Khan lined up to work on Cydin+Banshee integration, but sadly there were not enough GSoC spots for him :( (fortunately he told me he still wanted to work on it with me in his spare time).
    • I had Tomasz Maczyński lined up to work on Banshee integration with more REST APIs, and fortunately he was selected! So expect some nice FanArt.TV and SongKick plugins soon!


    20 de September de 2016

    WebKitGTK+ 2.14

    These six months have gone by so fast and here we are again, excited about the new WebKitGTK+ stable release. This is a release with almost no new API, but with major internal changes that we hope will improve all the applications using WebKitGTK+.

    The threaded compositor

    This is the most important change introduced in WebKitGTK+ 2.14 and what kept us busy for most of this release cycle. The idea is simple: we still render everything in the web process, but the accelerated compositing (all the OpenGL calls) has been moved to a secondary thread, leaving the main thread free to run all other heavy tasks like layout, JavaScript, etc. The result is a smoother experience in general: since the main thread is no longer busy rendering frames, it can process JavaScript faster, improving responsiveness significantly. For all the details about the threaded compositor, read Yoon’s post here.

    So, the idea is indeed simple, but the implementation required a lot of important changes in the whole graphics stack of WebKitGTK+.

    • Accelerated compositing always enabled: first of all, with the threaded compositor the accelerated mode is always enabled, so we no longer enter/exit the accelerated compositing mode when visiting pages depending on whether the contents require acceleration or not. This was the first challenge because there were several bugs related to accelerated compositing being always enabled, and even missing features like the web view background colors that didn’t work in accelerated mode.
    • Coordinated Graphics: it was introduced in WebKit when other ports switched to do the compositing in the UI process. We are still doing the compositing in the web process, but being in a different thread also needs coordination between the main thread and the compositing thread. We switched to use coordinated graphics too, but with some modifications for the threaded compositor case. This is the major change in the graphics stack compared to the previous one.
    • Adaptation to the new model: finally we had to adapt to the threaded model, mainly due to the fact that some tasks that were expected to be synchronous before became asynchronous, like resizing the web view.

    This is a big change that we expect will drastically improve the performance of WebKitGTK+, especially in embedded systems with limited resources, but like all big changes it can also introduce new bugs or issues. Please, file a bug report if you notice any regression in your application. If you have any problem running WebKitGTK+ in your system or with your GPU drivers, please let us know. It’s still possible to disable the threaded compositor in two different ways. You can use the environment variable WEBKIT_DISABLE_COMPOSITING_MODE at runtime, but this will disable accelerated compositing support, so websites requiring acceleration might not work. To disable the threaded compositor and bring back the previous model you have to recompile WebKitGTK+ with the option ENABLE_THREADED_COMPOSITOR=OFF.

    Wayland

    WebKitGTK+ 2.14 is the first release that we can consider feature complete in Wayland. While previous versions worked in Wayland there were two important features missing that made it quite annoying to use: accelerated compositing and clipboard support.

    Accelerated compositing

    More and more websites require acceleration to work properly and it’s now a requirement of the threaded compositor too. WebKitGTK+ has supported accelerated compositing for a long time, but the implementation was specific to X11. The main challenge is compositing in the web process and sending the results to the UI process to be rendered on the actual screen. In X11 we use an offscreen redirected XComposite window to render in the web process, sending the XPixmap ID to the UI process, which renders the window offscreen contents in the web view and uses the XDamage extension to track the repaints happening in the XWindow. In Wayland we use a nested compositor in the UI process that implements the Wayland surface interface and a private WebKitGTK+ protocol interface to associate surfaces in the UI process to the web pages in the web process. The web process connects to the nested Wayland compositor and creates a new surface for the web page that is used to render accelerated contents. On every swap buffers operation in the web process, the nested compositor in the UI process is automatically notified through the Wayland surface protocol, and new contents are rendered in the web view. The main difference compared to the X11 model is that Wayland uses EGL in both the web and UI processes, so what we have in the UI process in the end is not a bitmap but a GL texture that can be used to render the contents to the screen using the GPU directly. We use gdk_cairo_draw_from_gl() when available to do that, falling back to using glReadPixels() and a cairo image surface for older versions of GTK+. This can make a huge difference, especially on embedded devices, so we are considering using the nested Wayland compositor even on X11 in the future if possible.

    Clipboard

    The WebKitGTK+ clipboard implementation relies on GTK+, and there’s nothing X11-specific in there; however, the clipboard was read/written directly by the web processes. That doesn’t work in Wayland, even though we use GtkClipboard, because Wayland only allows clipboard operations between compositor clients, and web processes are not Wayland clients. This required moving the clipboard handling from the web process to the UI process. Clipboard handling is now centralized in the UI process, and clipboard contents to be read/written are sent to the different WebKit processes using the internal IPC.

    Memory pressure handler

    The WebKit memory pressure handler is a monitor that watches the system memory (not only the memory used by the web engine processes) and tries to release memory under low memory conditions. This is quite an important feature on embedded devices with memory limitations. This has been supported in WebKitGTK+ for some time, but the implementation is based on cgroups and systemd, which are not available in all systems and require user configuration. So, in practice nobody was actually using the memory pressure handler. Watching system memory in Linux is a challenge, mainly because /proc/meminfo is not pollable, so you need manual polling. In WebKit, there’s a memory pressure handler in every secondary process (Web, Plugin and Network), so waking up every second to read /proc/meminfo from every web process would not be acceptable. This is not a problem when using cgroups, because the kernel interface provides a way to poll an EventFD to be notified when memory usage is critical.

    WebKitGTK+ 2.14 has a new memory monitor, used only when cgroups/systemd is not available or configured, based on polling /proc/meminfo to ensure the memory pressure handler is always available. The monitor lives in the UI process, to ensure there’s only one process doing the polling, and uses a dynamic poll interval based on the last system memory usage to read and parse /proc/meminfo in a secondary thread. Once memory usage is critical all the secondary processes are notified using an EventFD. Using EventFD for this monitor too, not only is more efficient than using a pipe or sending an IPC message, but also allows us to keep almost the same implementation in the secondary processes that either monitor the cgroups EventFD or the UI process one.
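
    To illustrate the mechanism (this is not WebKit’s actual code, just a standalone sketch of the eventfd pattern it relies on): a monitor process creates an eventfd, the workers inherit and poll it, and the monitor writes to it when it decides memory usage is critical.

    #include <poll.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/eventfd.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main (void)
    {
        /* The monitor creates the eventfd; forked workers inherit it. */
        int efd = eventfd (0, 0);
        if (efd == -1)
            return 1;

        if (fork () == 0) {
            /* Worker: block until the monitor signals memory pressure. */
            struct pollfd pfd = { .fd = efd, .events = POLLIN };
            poll (&pfd, 1, -1);
            uint64_t count;
            read (efd, &count, sizeof (count));
            printf ("worker: memory pressure signalled, releasing caches\n");
            _exit (0);
        }

        /* Monitor: pretend parsing /proc/meminfo found critical usage. */
        sleep (1);
        uint64_t one = 1;
        write (efd, &one, sizeof (one));
        wait (NULL);
        close (efd);
        return 0;
    }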

    Other improvements and bug fixes

    Like in all other major releases there are a lot of other improvements, features and bug fixes. The most relevant ones in WebKitGTK+ 2.14 are:

    • The HTTP disk cache implements speculative revalidation of resources.
    • The media backend now supports video orientation.
    • Several bugs have been fixed in the media backend to prevent deadlocks when playing HLS videos.
    • The amount of file descriptors that are kept open has been drastically reduced.
    • Fixed the poor performance with the modesetting Intel driver when DRI3 is enabled.

    11 de August de 2016

    Going to GUADEC 2016

    I am at the airport getting ready to board, beginning my trip to Karlsruhe for GUADEC 2016. I’ll be there with my colleague Michael Catanzaro. Please talk to us if you are interested in browsers or Igalia.

    22 de March de 2016

    WebKitGTK+ 2.12

    We did it again, the Igalia WebKit team is pleased to announce a new stable release of WebKitGTK+, with a bunch of bugs fixed, some new API bits and many other improvements. I’m going to talk here about some of the most important changes, but as usual you have more information in the NEWS file.

    FTL

    FTL JIT is a JavaScriptCore optimizing compiler that was developed using LLVM to do low-level optimizations. It has been used by the Mac port since 2014, but we hadn’t been able to use it because it required some patches for LLVM to work on x86-64 that were not included in any official LLVM release, and there were also some crashes that only happened on Linux. At the beginning of this release cycle we already had LLVM 3.7 with all the required patches, and the crashes had been fixed as well, so we finally enabled FTL for the GTK+ port. But in the middle of the release cycle Apple surprised us by announcing that they had the new FTL B3 backend ready. B3 replaces LLVM and is entirely developed inside WebKit, so it doesn’t require any external dependency. JavaScriptCore developers quickly managed to make B3 work on Linux-based ports, and we decided to switch to B3 as soon as possible to avoid shipping a release with LLVM only to remove it in the next one. I’m not going to go into the technical details of FTL and B3, because they are very well documented and probably too boring for most people; the key point is that they improve overall JavaScript performance in terms of speed.

    Persistent GLib main loop sources

    Another performance improvement introduced in WebKitGTK+ 2.12 has to do with main loop sources. WebKitGTK+ makes extensive use of the GLib main loop; it has its own RunLoop abstraction on top of the GLib main loop that is used by all secondary processes and most of the secondary threads as well, scheduling main loop sources to send tasks between threads. JavaScript timers, animations, multimedia, the garbage collector, and many other features are based on scheduling main loop sources. In most cases we are actually scheduling the same callback all the time, but creating and destroying the GSource each time. We realized that creating and destroying main loop sources caused overhead with a significant impact on performance. In WebKitGTK+ 2.12 all main loop sources were replaced by persistent sources, which are normal GSources that are never destroyed (unless they are not going to be scheduled anymore). We simply use the GSource ready time to make them active or inactive when we want to schedule or stop them.
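
    As a minimal sketch of that trick (plain C with GLib >= 2.36; the names are made up for the example and this is not WebKit’s actual RunLoop code), the source is created and attached once, and its ready time is toggled to schedule or stop it:

    #include <glib.h>

    static gboolean
    dispatch_task (GSource *source, GSourceFunc callback, gpointer user_data)
    {
        /* Make the source inactive again until the next schedule. */
        g_source_set_ready_time (source, -1);
        return callback ? callback (user_data) : G_SOURCE_CONTINUE;
    }

    static GSourceFuncs persistent_source_funcs = {
        NULL, /* prepare: the ready time alone decides when we dispatch */
        NULL, /* check */
        dispatch_task,
        NULL, /* finalize */
    };

    static gboolean
    on_task (gpointer user_data)
    {
        g_print ("task dispatched\n");
        g_main_loop_quit (user_data);
        return G_SOURCE_CONTINUE; /* keep the source alive: it is persistent */
    }

    int
    main (void)
    {
        GMainLoop *loop = g_main_loop_new (NULL, FALSE);
        GSource *source = g_source_new (&persistent_source_funcs, sizeof (GSource));

        g_source_set_callback (source, on_task, loop, NULL);
        g_source_attach (source, NULL);

        g_source_set_ready_time (source, 0);   /* schedule: dispatch as soon as possible */
        /* g_source_set_ready_time (source, -1) would stop it without destroying it. */

        g_main_loop_run (loop);
        g_source_destroy (source);
        g_source_unref (source);
        g_main_loop_unref (loop);
        return 0;
    }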

    Overlay scrollbars

    GNOME designers have been asking us to implement overlay scrollbars since they were introduced in GTK+, because WebKitGTK+ based applications didn’t look consistent with all the other GTK+ applications. Since WebKit2, the web view is no longer a GtkScrollable, but it’s scrollable by itself, using either the native scrollbar appearance or the one defined in CSS. This means we have our own scrollbars implementation that we try to render as close as possible to the native ones, and that’s why it took us so long to find the time to implement overlay scrollbars. But WebKitGTK+ 2.12 finally implements them, and they are, of course, enabled by default. There’s no API to disable them, but we honor the GTK_OVERLAY_SCROLLING environment variable, so they can be disabled at runtime.

    But the appearance was not the only thing that made our scrollbars inconsistent with the rest of the GTK+ applications; we also had different behavior regarding the actions performed for the mouse buttons, and some other bugs, all of which are fixed in 2.12.

    The NetworkProcess is now mandatory

    The network process was introduced in WebKitGTK+ 2.4 to make it possible to use multiple web processes. We had two different paths for loading resources depending on the process model being used. When using the shared secondary process model, resources were loaded by the web process directly, while when using the multiple web process model, the web processes sent the requests to the network process to be loaded. Maintaining these two different paths was not easy, with some bugs happening only when using one model or the other, and the network process also gained features, like the disk cache, that were not available in the web process. In WebKitGTK+ 2.12 the non-network-process path has been removed, and the shared secondary process model has become the multiple web process model with a limit of 1. In practice this means that a single web process is still used, but networking happens in the network process.

    NPAPI plugins in Wayland

    I have read in many bug reports and mailing lists that NPAPI plugins will not be supported in Wayland, so things like http://extensions.gnome.org will not work. That’s not entirely true. NPAPI plugins can be windowed or windowless. Windowed plugins are those that use their own native window for rendering and handling events, implemented on X11 based systems using the XEmbed protocol. Since Wayland doesn’t support XEmbed and doesn’t provide an alternative either, it’s true that windowed plugins will not be supported in Wayland. Windowless plugins don’t require any native window; they use the browser window for rendering, and events are handled by the browser as well, using an X11 drawable and X events on X11 based systems. So it’s also true that windowless plugins that have a UI will not be supported in Wayland either. However, not all windowless plugins have a UI, and there’s nothing X11 specific in the rest of the NPAPI plugin API, so there’s no reason why those can’t work in Wayland. And that’s exactly the case of http://extensions.gnome.org, for example. In WebKitGTK+ 2.12 the X11 implementation of NPAPI plugins has been factored out, leaving the rest of the API implementation common and available to any window system used. That made it possible to support windowless NPAPI plugins with no UI in Wayland, and in any other non-X11 system, of course.

    New API

    And as usual we have completed our API with some new additions:

     

    19 de March de 2016

    Let’s encrypt the web!

    What is Let’s Encrypt

    It is well known that we should encrypt whenever we can, and the web is no exception. In fact, Google has been encouraging the use of encrypted websites for quite some time, threatening to penalize those that are not.

    The thing is that, to have an encrypted website, we face two problems: on the one hand, buying a certificate from a recognized certificate authority, and on the other, renewing it periodically (which usually has a cost as well). For some time now, initiatives such as CaCert have tried to create a certificate authority that is free of cost for its users and managed by the community; even though I am one of its notaries myself, I have to say that its success has always been limited, among other reasons because it has been with us for quite a few years and is still not recognized by most browsers.

    On top of that, renewal has another problem: you have to keep track of it. To solve both problems, an initiative called Let’s Encrypt has emerged.

    Let's Encrypt

    The first thing we notice is that it is recognized by all browsers. The cost of achieving this must be covered by the many sponsors the project has.

    Secondly, it uses a challenge, called ACME, to verify that the website we want to encrypt owns the domain it is going to use (for example, https://dramor.net/). With this verification, certificates can be issued at no cost without trouble (since they are generated automatically), as long as we don’t need other features such as identity validation.

    We will see that with this project, website certificates can be obtained quite easily. For example, let’s see how to do it with Nginx.

    Implementation on the DrAmor.net sites

    First of all, I should say that I’m not going to provide the complete nginx configuration, assuming that the reader knows what I’m talking about or can rely on the tutorials mentioned in this article.

    In general, what we will see works on any Linux or Unix system that can run git, python and nginx. The first thing we do is clone the Let’s Encrypt repository:

    $ git clone https://github.com/letsencrypt/letsencrypt

    This repository includes plugins for several web servers, but since Nginx is not yet well supported, we have to use the webroot plugin.

    We followed the official documentation and the Digital Ocean tutorial to get an idea, although the steps we took were, first of all, to create a configuration file called /etc/letsencrypt/letsencrypt-dramor.net.ini:

    rsa-key-size = 4096
    
    email = Direccion-email-valida@gmail.com
    
    webroot-map = {"dramor.net,www.dramor.net":"/var/www/dramor.net", "blog.dramor.net,www.blog.dramor.net":"/var/www/dramor_blog", "home.dramor.net":"/var/www/dramor_home"}

    Of course, the e-mail address must be valid; I hide it here just to avoid a bit of spam. And the webroot-map is simply a JSON map specifying the domains to sign, together with the root folder of each site, which must include a subfolder published by nginx so that the ACME challenge can be run.

    The folder to publish in nginx must be called .well-known. For example, we can publish it in the nginx virtual sites with:

    location ~ /.well-known {
        allow all;
    }

    Once nginx is configured, we can generate the certificates:

    $ cd letsencrypt
    
    $ sudo ./letsencrypt-auto certonly -a webroot --renew-by-default --config /etc/letsencrypt/letsencrypt-dramor.net.ini
    Checking for new version...
    Requesting root privileges to run letsencrypt...
       sudo /home/jjamor/.local/share/letsencrypt/bin/letsencrypt certonly -a webroot --renew-by-default --config /etc/letsencrypt/letsencrypt-dramor.net.ini
    
    IMPORTANT NOTES:
     - Congratulations! Your certificate and chain have been saved at
       /etc/letsencrypt/live/dramor.net/fullchain.pem. Your cert will
       expire on 2016-06-16. To obtain a new version of the certificate in
       the future, simply run Let's Encrypt again.
     - If you like Let's Encrypt, please consider supporting our work by:
    
       Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
       Donating to EFF:                    https://eff.org/donate-le

    The first time it runs, it will create a virtualenv with the required Python version and update a few packages. Once the certificates are generated, they are installed in /etc/letsencrypt/live/dramor.net. If there was an error, we should check that .well-known is accessible in the nginx virtual sites and that the DNS records of the desired domain correctly point to the web server, among other things. The error messages are always descriptive enough not to require further explanation. For example, it may warn us that some DNS name of the domain to secure does not resolve its A or CNAME record correctly, or does not point to our server.

    Using the certificates in Nginx is then just a matter of adding nginx configuration lines similar to:

    ssl_certificate /etc/letsencrypt/live/dramor.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/dramor.net/privkey.pem;

    Automatic renewal

    Certificates generated this way are valid for only three months. But renewing them simply consists of running the ACME challenge again.

    To do so, we just have to call letsencrypt-auto with another parameter:

    $ ./letsencrypt-auto renew

    We can write a script to be run periodically (for example, from a weekly cron job) that performs this action and then reloads nginx. This way, we can forget about renewals: they will happen whenever they are needed.

    It is a good idea to test all the scripts before automating them with cron. For example, if we try to renew the certificates right after generating them:

    $ ./letsencrypt-auto renew
    Checking for new version...
    Requesting root privileges to run letsencrypt...
       sudo /home/jjamor/.local/share/letsencrypt/bin/letsencrypt renew
    Processing /etc/letsencrypt/renewal/dramor.net.conf
    
    The following certs are not due for renewal yet:
      /etc/letsencrypt/live/dramor.net/fullchain.pem (skipped)
    No renewals were attempted.

    We can force the renewal to verify that it works, with:

    $ ./letsencrypt-auto renew --force-renewal

    Finally, if we are going to do a lot of testing, it is better to generate the certificates with the staging option, so that non-valid certificates are issued. If we run many tests in a row, we will easily hit the daily limit and won’t be able to generate more until the next day.

    If we have done everything correctly, we will be able to browse the website over an encrypted connection; and if we want, we can check its quality here: SSLLabs. Getting a good score is then a matter of following certain good practices with the web server, which I won’t cover here.

    26 de February de 2016

    Über latest Media Source Extensions improvements in WebKit with GStreamer

    In this post I am going to talk about the implementation of the Media Source Extensions (known as MSE) in the WebKit ports that use GStreamer. These ports are WebKitGTK+, WebKitEFL and WebKitForWayland, though only the latter has the latest work-in-progress implementation. Of course we hope to upstream WebKitForWayland soon and with it, this backend for MSE and the one for EME.

    My colleague Enrique at Igalia wrote a post about this about a week ago. I recommend you read it before continuing with mine to understand the general picture and some of the issues that I managed to fix in that implementation. Come on, go and read it, I’ll wait.

    One of the challenges here is something a bit unnatural in the GStreamer world. We have to process the stream information and then make some metadata available to the JavaScript app before playing instead of just pushing everything to a playing pipeline and being happy. For this we created the AppendPipeline, which processes the data and extracts that information and keeps it under control for the playback later.

    The idea of our AppendPipeline is to put a data stream into it and get it processed at the other side. It has an appsrc, a demuxer (qtdemux currently) and an appsink to pick up the processed data. A tricky part of the spec is that when you append data to the SourceBuffer, that operation has to block it, fail any other append operation with an error while the current one is ongoing, and signal when it finishes. Our main issue with this is that the appends can contain any amount of data, from headers and buffers to only headers or just partial headers. Basically, the information can be partial.
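
    To make the shape of this pipeline more concrete, here is an illustrative sketch in plain C (not the actual WebKit AppendPipeline; element handling, error checking and the per-stream appsink management are heavily simplified):

    #include <gst/gst.h>
    #include <gst/app/gstappsrc.h>

    static void
    on_demuxer_pad_added (GstElement *demuxer, GstPad *pad, gpointer user_data)
    {
        /* A new stream was found while processing the headers: route it to the
         * appsink (a real implementation would use one appsink per stream). */
        GstElement *appsink = GST_ELEMENT (user_data);
        GstPad *sinkpad = gst_element_get_static_pad (appsink, "sink");
        if (!gst_pad_is_linked (sinkpad))
            gst_pad_link (pad, sinkpad);
        gst_object_unref (sinkpad);
    }

    static GstElement *
    create_append_pipeline (void)
    {
        /* gst_init() must have been called before this point. */
        GstElement *pipeline = gst_pipeline_new ("append-pipeline");
        GstElement *appsrc = gst_element_factory_make ("appsrc", "src");
        GstElement *demuxer = gst_element_factory_make ("qtdemux", "demuxer");
        GstElement *appsink = gst_element_factory_make ("appsink", "sink");

        gst_bin_add_many (GST_BIN (pipeline), appsrc, demuxer, appsink, NULL);
        gst_element_link (appsrc, demuxer);
        g_signal_connect (demuxer, "pad-added", G_CALLBACK (on_demuxer_pad_added), appsink);
        return pipeline;
    }

    /* Appending data from the SourceBuffer is then roughly a matter of wrapping the
     * bytes in a GstBuffer and pushing it into the appsrc. */
    static void
    append_data (GstElement *appsrc, const guint8 *data, gsize size)
    {
        GstBuffer *buffer = gst_buffer_new_allocate (NULL, size, NULL);
        gst_buffer_fill (buffer, 0, data, size);
        gst_app_src_push_buffer (GST_APP_SRC (appsrc), buffer);
    }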

    First I’ll present again Enrique’s AppendPipeline internal state diagram:

    Let me first explain the easiest case, which is headers and buffers being appended. As soon as the process is triggered, we move from Not started to Ongoing; then, as the headers are processed, we get the pads at the demuxer and begin to receive buffers, which makes us move to Sampling. Then we have to detect that the operation has ended and move to Last sample, and then again to Not started. If we have received only headers we will not move to Sampling, because we will not receive any buffers, but we still have to detect this situation and be able to move to Data starve and then again to Not started.
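
    In code, those states could be expressed with something like the following enum (illustrative names following the diagram, not necessarily the actual identifiers used in WebKit):

    typedef enum {
        APPEND_STATE_NOT_STARTED,  /* waiting for an append operation */
        APPEND_STATE_ONGOING,      /* data pushed, headers being processed */
        APPEND_STATE_SAMPLING,     /* demuxer pads created, buffers flowing */
        APPEND_STATE_LAST_SAMPLE,  /* buffers stopped arriving, the append is done */
        APPEND_STATE_DATA_STARVE   /* only (possibly partial) headers, no buffers */
    } AppendState;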

    Our first approach was to use two different timeouts: one to detect that we should move from Ongoing to Data starve if we did not receive any buffer, and another to move from Sampling to Last sample if we stopped receiving buffers. This solution worked, but it was a bit racy and we tried to find a less error-prone solution.

    We then tried using custom downstream events injected from the source: the moment they were received at the sink we could move from Sampling to Last sample, or, if only headers were injected, the pads were created and we could move from Ongoing to Data starve. It took some time and several iterations to fine-tune this, but we managed to solve almost all cases except one: receiving only partial headers and no buffers.

    If the demuxer received partial headers and no buffers, it stalled, and we were not receiving any pads or any event at the output, so we could not tell when the append operation had ended. Tim-Philipp gave me the idea of using the need-data signal on the source, which would be fired when the demuxer ran out of useful data. I realized then that the events were not needed anymore and that we could handle everything with that signal.

    The need-data signal is sometimes fired when the pipeline is linked, and also when the demuxer finishes processing data, regardless of whether the stream contains partial headers, complete headers, or headers and buffers. It works perfectly once we are able to disregard that first signal we sometimes receive. To do that, we use a pad probe to check that at least one buffer has left the appsrc: if we receive the signal before any buffer has been detected at the probe, we disregard it; otherwise, if we have already seen a buffer at the probe, we can consider that any need-data signal means the processing has ended, and we can tell the JavaScript app that the append process has finished.
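
    A rough sketch of that approach in plain C follows (illustrative only, not the actual WebKit code; notify_append_finished() is a hypothetical hook standing in for the code that signals the SourceBuffer, and a real implementation would keep the per-append state somewhere safer than a global):

    #include <gst/gst.h>

    static gboolean saw_buffer_at_probe = FALSE; /* reset before each append in real code */

    static GstPadProbeReturn
    appsrc_src_pad_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
    {
        /* The appended bytes (headers and/or buffers) are leaving the appsrc. */
        saw_buffer_at_probe = TRUE;
        return GST_PAD_PROBE_OK;
    }

    static void
    notify_append_finished (gpointer user_data)
    {
        g_print ("append finished\n"); /* hypothetical hook into the SourceBuffer code */
    }

    static void
    on_need_data (GstElement *appsrc, guint length, gpointer user_data)
    {
        if (!saw_buffer_at_probe)
            return; /* spurious signal emitted right after linking: disregard it */

        /* The demuxer ran out of data: the append operation has finished. */
        notify_append_finished (user_data);
    }

    static void
    setup_append_detection (GstElement *appsrc)
    {
        GstPad *srcpad = gst_element_get_static_pad (appsrc, "src");
        gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BUFFER, appsrc_src_pad_probe, NULL, NULL);
        gst_object_unref (srcpad);
        g_signal_connect (appsrc, "need-data", G_CALLBACK (on_need_data), NULL);
    }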

    Both the need-data signal and the probe information arrive on GStreamer internal threads, so we could have used mutexes to overcome any race conditions. We thought, though, that deferring the operations to the main thread through the pipeline bus was a better idea that would create fewer issues with race conditions or deadlocks.
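
    A small sketch of that bus trick (again illustrative; the message name is made up) would be posting an application message from the streaming thread and handling it in the bus watch, which runs in the main thread:

    #include <gst/gst.h>

    static void
    post_append_finished (GstElement *pipeline)
    {
        /* Called from a GStreamer streaming thread (need-data callback or pad probe). */
        GstStructure *structure = gst_structure_new_empty ("append-pipeline-finished");
        gst_element_post_message (pipeline,
                                  gst_message_new_application (GST_OBJECT (pipeline), structure));
    }

    static gboolean
    bus_message_cb (GstBus *bus, GstMessage *message, gpointer user_data)
    {
        /* Runs in the main thread thanks to gst_bus_add_watch(). */
        if (GST_MESSAGE_TYPE (message) == GST_MESSAGE_APPLICATION &&
            gst_message_has_name (message, "append-pipeline-finished")) {
            /* Safe place to update the SourceBuffer state and notify JavaScript. */
        }
        return G_SOURCE_CONTINUE;
    }

    static void
    attach_bus_watch (GstElement *pipeline)
    {
        GstBus *bus = gst_element_get_bus (pipeline);
        gst_bus_add_watch (bus, bus_message_cb, NULL);
        gst_object_unref (bus);
    }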

    To finish, I prefer to give some good news about performance. We mainly use the YouTube conformance tests to ensure our implementation works, and I can proudly say that these changes cut the execution time in half!

    That’s all folks!