Planeta GNOME Hispano
GNOME Hispanic activity, 24 x 7

05 January 2020

Introducing geewallet

Version 0.4.2.187 of geewallet has just been published to the snap store! You can install it by looking for its name in the store or by installing it from the command line with `snap install geewallet`. It features a very simple and minimalistic UI/UX. Nothing very fancy, especially because it has a single codebase that targets many (potential) platforms; e.g. you can also find it in the Android App Store.

What was my motivation to create geewallet in the first place, around 2 years ago? Well, I was very excited about the “global computing platform” that Ethereum was promising. At the time, I thought it would be the best replacement for Namecoin: a decentralised naming system, but not focused on just that aspect, instead bringing Turing-completeness so that you can build whatever you want on top of it, not just a key-value store. So then, I got ahold of some ethers to play with the platform. But back then, I couldn’t find any wallet that I liked, especially when considering security. Most people were copy+pasting their private keys into a website (!) called MyEtherWallet. Not only was this idea terrifying (since you had to trust not just the security skills of the sysadmin in charge of the domain & server, but also that the developers of the software wouldn’t turn rogue…), it was even worse than that: it was worse than using a normal hot wallet. And what I actually wanted was a cold wallet, a wallet that could run on an offline device, to make sure hacking it would be impossible (not faraday-cage-impossible, but reasonably impossible).

So there I did it, I created my own wallet.

After some weeks, I added bitcoin support to it thanks to the NBitcoin library (good work Nicholas!). After some months, I added a cross-platform UI besides the initial archaic command-line frontend. These days it looks like this:



What was my motivation to make geewallet a brain wallet? Well, at the time (and maybe still today, at least until I unveil this project), the only decent brain wallet out there that seemed sufficiently secure (against brute force attacks) was WarpWallet, from the Keybase company. If you don’t believe in their approach, note that they have even placed a bounty on a decently small passphrase (so if you ever think this kind of wallet could be hacked, you can be fairly sure any cracker would target that bounty first, before thinking of you). The worst part of it, again, was that to be able to use it you had to use a web interface, so you had the double-trust problem again. Now geewallet brings the same WarpWallet seed generation algorithm (backed by unit tests, of course) but in a desktop/mobile approach, so that you own the hardware where the seed is generated. No need to write down long seeds of random words on pieces of paper anymore: your mind is the limit! (And of course geewallet will warn the user in case the passphrase is too short or simple: it even detects whether all the words belong to the dictionary, to deter low entropy from the human perspective.)
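For the curious, the seed derivation is compact enough to sketch here. Below is my own minimal Python illustration of the WarpWallet algorithm as Keybase published it (an scrypt half XORed with a PBKDF2 half); I'm quoting the parameters from memory, so double-check them against the WarpWallet spec (and against geewallet's own unit tests) before trusting this sketch with real money:

import hashlib

def warpwallet_seed(passphrase: str, salt: str = "") -> bytes:
    # First half: scrypt over passphrase||0x01, salted with salt||0x01
    s1 = hashlib.scrypt(passphrase.encode() + b"\x01",
                        salt=salt.encode() + b"\x01",
                        n=2**18, r=8, p=1, maxmem=2**30, dklen=32)
    # Second half: PBKDF2-HMAC-SHA256 over passphrase||0x02, salt||0x02
    s2 = hashlib.pbkdf2_hmac("sha256",
                             passphrase.encode() + b"\x02",
                             salt.encode() + b"\x02",
                             2**16, dklen=32)
    # The 256-bit private key is the XOR of both halves
    return bytes(a ^ b for a, b in zip(s1, s2))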

Why did I add support for Litecoin and Ethereum Classic to the wallet? First, let me tell you that bitcoin and ethereum, as technological innovations and network effects, are very difficult to beat. And in fact, I’m not a fan of the proliferation of dubious new coins/tokens portrayed as awesome, which claim to be as efficient and scalable as the first two. They would need to beat the network effect not only when it comes to users, but also developers (all the best cryptographers are working on Bitcoin and Ethereum technologies). However, Litecoin and Ethereum Classic are so similar to Bitcoin and Ethereum, respectively, that adding support for them was less than a day’s work. And they are not completely irrelevant: Litecoin may bring zero-knowledge proofs in an upcoming update soon (plus, its fees are lower today, so it’s a cheaper alternative testnet with real value); and Ethereum Classic has some inherent characteristics that may make it more decentralised than Ethereum in the long run (governance that doesn't follow any cult of personality, plus it will remain a Turing-complete platform on top of Proof of Work, instead of switching to Proof of Stake; to understand why this is important, I recommend watching this video).

Another good reason why I started something like this from scratch is that I wanted to use F# in a real open source project. I had been playing with it on a personal (private) project 2 years before starting this one, so I wanted to show the world that you can build a decent desktop app with simple and not too opinionated/academic functional programming. It reuses all the power of the .NET platform: you get debuggers, you can target mobile devices, and you get immutability by default; all three in one, in this decade, at last. (BTW, everything is written in F#, even the build scripts.)

What’s the roadmap of geewallet? The three most important topics I want to tackle shortly are:
  • Make it even more user friendly: blockchain addresses are akin to the numeric IP addresses of the early 80s when DNS still didn’t exist. We plan to use either ENS or IPNS or BNS or OpenCAP so that people can identify recipients much more easily.
  • Implement Layer2 technologies: we’re already past the proof of concept phase. We have branches that can open channels. The promise of these technologies is instantaneous transactions (no waits!) and ridiculous (if not free) fees.
  • Switch the GTK Xamarin.Forms driver to work with the new “GtkSharp” binding under the hood, which doesn’t require glue libraries. (I’ve had quite a few nightmares with native dependencies/libs when building the sandboxed snap package!)
With less priority:
  • Integrate with some Rust projects: MimbleWimble(Grin) lib, the distributed COMIT project for trustless atomic swaps, or other Layer2-related ones such as rust-lightning.
  • Cryptography work: threshold keys or deniable encryption (think "duress" passwords).
  • NFC support (find recipients without QR codes!).
  • Tizen support (watches!).
  • Acceptance testing via UI Selenium tests (look up the Uno Platform).

Areas where I would love contributions from the community:
  • Flatpak support: unfortunately I haven’t had time to look at this sandboxing technology, but it shouldn’t be too hard to do, especially considering that there’s already a Mono-based project that supports it: SparkleShare.
  • Ubuntu packaging: there’s a patch blocked on some Ubuntu bug that makes the wallet (or any .NET app these days, as it affects the .NET package manager: nuget) fail to build on Ubuntu 19.10. If this patch is not merged soon, the next LTS of Ubuntu will ship with this bug :( As far as I understand, what needs to be solved is this issue, so that the latest hotfixes are bundled. (BTW I have to thank Timotheus Pokorra, the person in charge of packaging Mono in Fedora, for his help on this matter so far.)
  • GNOME community: I’m in search of a home for this project. I don’t like that it lives under my GitLab username, because it’s not easy to find. One of the reasons I’ve used GitLab is that I love the fact that, being open source, many communities are adopting this infrastructure, like Debian and GNOME. That’s why I’ve used it as a bug tracker, for merge requests and to run CI jobs. This means it should be easy to migrate to GNOME’s GitLab, shouldn't it? There are unmaintained projects (e.g. banshee, which I couldn’t continue maintaining due to changes in life priorities...) already hosted there, so maybe it’s not too much to ask if I could host a maintained one? It's probably the first Gtk-based wallet out there.

And just in case I wasn't clear:
  • Please don’t ask me to add support for your favourite %coin% or <token>.
  • If you want to contribute, don’t ask me what to work on; think of the personal itch you want to scratch and discuss it with me by filing a GitLab issue. If you’re a C# developer, I wrote a quick F# tutorial for you.
  • Thanks for reading up until here! It’s my pleasure to write about this project.

I'm excited about the world of private-key management. I think we can do much better than what we have today: most people think of hardware wallets as unhackable or as cold storage, but most of them are used via USB or Bluetooth! This means they are not actually cold storage, so software wallets with offline support (also called air-gapped) are more secure! I think that eventually these tools will even merge with other ubiquitous tools with which we’re more familiar today: password managers!

You can follow the project on twitter (yes I promise I will start using this platform to publish updates).

PS: If you're still not convinced about these technologies, or if you didn't understand the PoW video I posted earlier, I recommend going back to basics by watching this other video, produced by a mathematician educator, which explains it really well.

23 December 2019

End of the year Update: 2019 edition

It’s the end of December and it seems that yet another year has gone by, so I figured that I’d write an EOY update to summarize my main work at Igalia as part of our Chromium team, as my humble attempt to make up for the lack of posts in this blog during this year.

I did quite a few things this year, but for the purpose of this blog post I’ll focus on what I consider the most relevant ones: work on the Servicification and the Blink Onion Soup projects, the migration to the new Mojo APIs and the BrowserInterfaceBroker, as well as a summary of the conferences I attended, both as a regular attendee and as a speaker.

But enough of an introduction, let’s dive now into the gory details…

Servicification: migration to the Identity service

As explained in my previous post from January, I started this year working on the Chromium Servicification (s13n) project. More specifically, I joined my teammates in helping with the migration to the Identity service, updating consumers of several classes from the sign-in component to ensure they now use the new IdentityManager API instead of directly accessing those other lower-level APIs.

This was important because at some point the Identity Service will run in a separate process, and a precondition for that to happen is that all access to sign-in related functionality has to go through the IdentityManager, so that other processes can communicate with it directly via the Mojo interfaces exposed by the Identity service.

I’ve already talked long enough about this in my previous post, so please take a look there if you want to know more details about what exactly that work involved.

The Blink Onion Soup project

Interestingly enough, a bit after finishing up working on the Identity service, our team dived deep into helping with another Chromium project that shared at least one of the goals of the s13n project: to improve the health of Chromium’s massive codebase. The project is code-named Blink Onion Soup and its main goal is, as described in the original design document from 2015, to “simplify the codebase, empower developers to implement features that run faster, and remove hurdles for developers interfacing with the rest of the Chromium”. There’s also a nice slide deck from 2016’s BlinkOn 6 that explains the idea in a more visual way, if you’re interested.


“Layers”, by Robert Couse-Baker (CC BY 2.0)

In a nutshell, the main idea is to simplify the codebase by removing/reducing the several layers of abstraction located between Chromium and Blink that were necessary back in the day, before Blink was forked out of WebKit, to support different embedders with their particular needs (e.g. Epiphany, Chromium, Safari…). Those layers made sense back then, but these days Blink’s only embedder is Chromium’s content module, which is the module that Chrome and other Chromium-based browsers embed to leverage Chromium’s implementation of the Web Platform, and also where the multi-process and sandboxing architecture is implemented.

And in order to implement the multi-process model, the content module is split into two main parts running in separate processes, which communicate with each other over IPC mechanisms: //content/browser, which represents the “browser process” that you embed in your application via the Content API, and //content/renderer, which represents the “renderer process” that internally runs the web engine’s logic, that is, Blink.

With this in mind, the initial version of the Blink Onion Soup project (aka “Onion Soup 1.0”) was born about 4 years ago, and the folks spearheading this proposal started working on a 3-step plan to implement their vision, which can be summarized as follows:

  1. Migrate usage of Chromium’s legacy IPC to the new IPC mechanism called Mojo.
  2. Move as much functionality as possible from //content/renderer down into Blink itself.
  3. Slim down Blink’s public APIs by removing classes/enums unused outside of Blink.

Three clear steps, but definitely not easy ones as you can imagine. First of all, if we were to remove levels of indirection between //content/renderer and Blink as well as to slim down Blink’s public APIs as much as possible, a precondition for that would be to allow direct communication between the browser process and Blink itself, right?

In other words, if you need your browser process to communicate with Blink for some specific purpose (e.g. reacting in a visual way to a Push Notification), it would certainly be sub-optimal to have something like this:

…and yet that is what would happen if we kept using Chromium’s legacy IPC which, unlike Mojo, doesn’t allow us to communicate with Blink directly from //content/browser, meaning that we’d need to go first through //content/renderer and then navigate through different layers to move between there and Blink itself.

In contrast, using Mojo would allow us to have Blink implement those remote services internally and then publicly declare the relevant Mojo interfaces so that other processes can interact with them without going through extra layers. Thus, doing that kind of migration would ultimately allow us to end up with something like this:

…which looks nicer indeed, since now it is possible to communicate directly with Blink, where the remote service would be implemented (either in its core or in a module). Besides, it would no longer be necessary to consume Blink’s public API from //content/renderer, nor the other way around, enabling us to remove some code.

However, we can’t simply ignore the stuff living in //content/renderer that implements part of the original logic, so before we can get to the lovely simplification shown above we would likely need to move some logic from //content/renderer right into Blink, which is what the second bullet point of the list above is about. Unfortunately, this is not always possible but, whenever it is an option, the job here is to figure out which of that logic in //content/renderer is really needed and then figure out how to move it into Blink, likely removing some code along the way.

This particular step is what we commonly call “Onion Soup’ing” //content/renderer/<feature> (not entirely sure “Onion Soup” is a verb in English, though…), and this is, for instance, how things looked before (left) and after (right) Onion Souping a feature I worked on myself: Chromium’s implementation of the Push API:


Onion Soup’ing //content/renderer/push_messaging

Note how the whole design got quite simplified moving from the left to the right side? Well, that’s because some abstract classes declared in Blink’s public API and implemented in //content/renderer (e.g. WebPushProvider, WebPushMessagingClient) are no longer needed now that those implementations got moved into Blink (i.e. PushProvider and PushMessagingClient), meaning that we can now finally remove them.

Of course, there were also cases where we found some public APIs in Blink that were not used anywhere, as well as cases where they were only being used inside of Blink itself, perhaps because nobody noticed when that happened at some point in the past due to some other refactoring. In those cases the task was easier, as we would just remove them from the public API, if completely unused, or move them into Blink if still needed there, so that they are no longer exposed to a content module that no longer cares about that.

Now, trying to provide a high-level overview of what our team “Onion Soup’ed” this year, I think I can say with confidence that we migrated (or helped migrate) more than 10 different modules like the one I mentioned above, such as android/, appcache/, media/stream/, media/webrtc, push_messaging/ and webdatabase/, among others. You can see the full list with all the modules migrated during the lifetime of this project in the spreadsheet tracking the Onion Soup efforts.

In my particular case, I “Onion Soup’ed” the PushMessaging, WebDatabase and SurroundingText features, which was a fairly complete exercise as it involved working on all 3 bullet points: migrating to Mojo, moving logic from //content/renderer to Blink and removing unused classes from Blink’s public API.

And as for slimming down Blink’s public API, I can tell that we helped get to a point where more than 125 classes/enums were removed from Blink’s public APIs, simplifying and reducing the Chromium codebase along the way, as you can check in this other spreadsheet that tracked that particular piece of work.

But we’re not done yet! While overall progress for the Onion Soup 1.0 project is around 90% right now, there are still a few more modules that require “Onion Soup’ing”, among which we’ll be tackling media/ (already WIP) and accessibility/ (starting in 2020), so there’s quite a bit more work to be done in that regard.

Also, there is a newer design document for the so-called Onion Soup 2.0 project that contains some tasks that we have already been working on for a while, such as “Finish Onion Soup 1.0”, “Slim down Blink public APIs”, “Switch Mojo to new syntax” and “Convert legacy IPC in //content to Mojo”, so we're definitely not done yet. Good news here, though: some of those tasks are quite advanced already, and in the particular case of the migration to the new Mojo syntax it’s nearly done by now, which is precisely what I’m talking about next…

Migration to the new Mojo APIs and the BrowserInterfaceBroker

Along with working on “Onion Soup’ing” some features, a big chunk of my time this year went also into this other task from the Onion Soup 2.0 project, where I was lucky enough again not to be alone, but accompanied by several of my team mates from Igalia‘s Chromium team.

This was a massive task where we worked hard to migrate all of Chromium’s codebase to the new Mojo APIs that were introduced a few months back, with the idea of getting Blink updated first and then having everything else migrated by the end of the year.


Progress of migrations to the new Mojo syntax: June 1st – Dec 23rd, 2019

But first things first: you might be wondering what was wrong with the “old” Mojo APIs since, after all, Mojo is the new thing we were migrating to from Chromium’s legacy API, right?

Well, as it turns out, the previous APIs had a few problems that were causing some confusion due to not providing the most intuitive type names (e.g. what is an InterfacePtrInfo anyway?), as well as being quite error-prone, since the old types were not as strict as the new ones in enforcing certain conditions that should not happen (e.g. trying to bind an already-bound endpoint shouldn’t be allowed). In the Mojo Bindings Conversion Cheatsheet you can find an exhaustive list of cases that needed to be considered, in case you want to know more details about this type of migration.

Now, as a consequence of this additional complexity, the task wouldn’t be as simple as a “search & replace” operation because, while moving from old to new code, it would often be necessary to fix situations where the old code was working fine just because it was relying on some constraints not being checked. And if you top that up with the fact that there were, literally, thousands of lines in the Chromium codebase using the old types, then you’ll see why this was a massive task to take on.

Fortunately, after a few months of hard work done by our Chromium team, we can proudly say that we have nearly finished this task, which involved more than 1100 patches landed upstream after combining the patches that migrated the types inside Blink (see bug 978694) with those that tackled the rest of the Chromium repository (see bug 955171).

And by “nearly finished” I mean an overall progress of 99.21% according to the Migration to new mojo types spreadsheet where we track this effort, where Blink and //content have been fully migrated, and all the other directories, aggregated together, are at 98.64%, not bad!

In this regard, I’ve also been sending a bi-weekly status report to the chromium-mojo and platform-architecture-dev mailing lists for a while (see the latest report here), so make sure to subscribe there if you’re interested, even though those reports might not last much longer!

Now, back with our feet on the ground, the main roadblock at the moment preventing us from reaching 100% is //components/arc, whose migration needs to be agreed with the folks maintaining a copy of Chromium’s ARC mojo files for Android and ChromeOS. This is currently under discussion (see chromium-mojo ML and bug 1035484) and so I’m confident it will be something we’ll hopefully be able to achieve early next year.

Finally, and still related to these Mojo migrations, my colleague Shin and I took a “little detour” while working on this migration and focused for a while on the more specific task of migrating uses of Chromium’s InterfaceProvider to the new BrowserInterfaceBroker class. And while this was not as massive a task as the other migration, it was also very important because, besides fixing some problems inherent to the old InterfaceProvider API, it also blocked the migration to the new mojo types, as InterfaceProvider usually relied on the old types!


Architecture of the BrowserInterfaceBroker

Good news here as well, though: after having the two of us working on this task for a few weeks, we can proudly say that, today, we have finished all 132 migrations that were needed and are now in the process of doing some after-the-job cleanup operations that will remove even more code from the repository! \o/

Attendance to conferences

This year was particularly busy for me in terms of conferences, as I did travel to a few events both as an attendee and a speaker. So, here’s a summary about that as well:

As usual, I started the year by attending one of my favourite conferences, FOSDEM 2019 in Brussels. And even though I didn’t have any talk to present there, I did enjoy my visit like every year I go. Being able to meet so many people and to attend such an impressive number of interesting talks over the weekend, while having some beers and chocolate, is always great!

Next stop was Toronto, Canada, where I attended BlinkOn 10 on April 9th & 10th. I was honoured to have a chance to present a summary of the contributions that Igalia made to the Chromium Open Source project in the 12 months before the event, which was a rewarding experience but also quite an intense one, because it was a lightning talk and I had to go through all the ~10 slides in a bit under 3 minutes! Slides are here and there is also a video of the talk, in case you want to check how crazy that was.

Took a bit of a rest from conferences over the summer and then attended, also as usual, the Web Engines Hackfest that we at Igalia have been organising every single year since 2009. Didn’t have a presentation this time, but still it was a blast to attend it once again as an Igalian and celebrate the hackfest’s 10th anniversary sharing knowledge and experiences with the people who attended this year’s edition.

Finally, I attended two conferences in the Bay Area in mid November: the first one was the Chrome Dev Summit 2019 in San Francisco on Nov 11-12, and the second one was BlinkOn 11 in Sunnyvale on Nov 14-15. It was my first time at the Chrome Dev Summit and I have to say I was fairly impressed by the event, how it was organised and the quality of the talks. It was also great for me, as a browser developer, to see first hand which things web developers are more and less excited about, what’s coming next… and to get to meet people I would never have had a chance to meet at other events.

As for BlinkOn 11, I presented a 30 min talk about our work on the Onion Soup project, the Mojo migrations and improving Chromium’s code health in general, along with my colleague Antonio Gomes. It was basically an “extended” version of this post, where we went through not only the tasks I was personally involved with, but also other tasks that other members of our team worked on during this year, which include many other things! Feel free to check out the slides here, as well as the video of the talk.

Wrapping Up

As you might have guessed, 2019 has been a pretty exciting and busy year for me work-wise, but the most interesting bit in my opinion is that what I mentioned here was just the tip of the iceberg… many other things happened on the personal side, starting with the fact that this was the year we consolidated our return to Spain after 6 years living abroad, for instance.

Also, getting back to work-related stuff, this year I was also accepted back into Igalia‘s Assembly, after having re-joined this amazing company in September 2018 following a 6-year “gap” living and working in the UK. Besides being something I was very excited and happy about, it also brought some more responsibilities onto my plate, as is natural.

Last, I can’t finish this post without being explicitly grateful to all the people I got to interact with during this year, both at work and outside, who made my life easier and nicer at so many different levels. To all of you, cheers!

And to everyone else reading this… happy holidays and happy new year in advance!

08 December 2019

«Chris stared out of the window», a theological tale

My English class. You should write a story starting with 🙶Chris stared out of the window waiting for the phone to ring🙷. Let’s do it.


Chris stared out of the window waiting for the phone to ring. He looked into the void while his mind wandered. Time is passing, but it is not. For an eternal being all time is present, but not always is time present. The past, the future are just states of mind for an overlord. But he is still waiting for the phone to ring. Time is coming. The decision was made. It had always been made, before time existed indeed. Chris knows all the details of the plan. He knows because he is God too. He knows because he conceived it. No matter if he had been waiting for the starting signal. No matter if he expects the Holy Spirit to announce it to him. You can call it protocol. He knows because he had decided how to do it. But Chris doubts. He is God. He is Holy Spirit. But he has been human too. The remembrance of his humanity brings him to a controversial state of mind. Now he doubts. He has been always doubting, since the world is the world and before the existence of time. And after too, because he is an eternal being. He now relives the feelings of being a human. He relives all the feelings of being all the humans. He revisits joy and pain. Joy is good, sure. But joy is nothing special for an overlord god. But pain… pain matters for a human — and Chris has been a human. Chris knows. Chris feels. Chris understands how sad human life can be. Chris knows because he is the Father creator. He created humans. He knows how mediocrity drives the character of all the creatures privileged with consciousness. A poisoned gift. He knows how evil is always just an expression of insecurities, the lack of certainty of what will happen tomorrow. What will happen just the next second. 🙶Will I be alive? Will the pain still exist?🙷. He knows because he feels. And he doubts because he feels. He feels it can’t be fair to judge, to condemn, to punish a living being because it was created that way. This would not be the full-of-love god the Evangelists announced. How could he punish for a sin he was the cause of. But, if not, can it be fair to all the others who behave according to the Word. All of those who, maybe with love, maybe with pain, maybe just out of selfishness, fulfilled God’s proposal of virtue and goodness. How can he not distinguish their works, not reward their efforts. How can it be fair. How can he be good. His is the power and the glory. He is in doubt. The phone rings.

20 November 2019

Some GNOME / LAS / Wikimedia love

For some time now I’ve been dedicating more time to Wikimedia-related activities. I love to share this time with the other opensource communities I’m involved in. This post just writes down a list of items/resources I’ve created related to events in this domain.

Wikidata

If you don’t know about Wikidata, you probably should take a look at it, because it’ll be the most important linked-data corpus in the world. In the future we will use WD as the cornerstone of many applications. Remember you read it here first.

About GUADEC:

About GNOME.Asia:

About LAS 2019:

And about the previous LAS format:

Wikimedia Commons

Wikimedia Commons is my current favourite place to publish pictures with open licensing these days. To me it is the ideal place to publish reusable resources with explicit Creative Commons open licensing. And you can contribute your own media without intermediaries.

About GUADEC:

About GNOME.Asia:

About LAS:

Epilogue

As you can check, the list is not complete, nor are all items fully described. I invite you to complete any information you want. For Wikidata there are many places to ask for help. And for WikiCommons you can help by uploading your own pictures. If you have doubts, just use the current examples as references or ask me directly.

Linux Applications Summit 2019 activity

Here is a summary of my activities at the past Linux App Summit this month in Barcelona.

My first goal was spreading the word about the Indico conference management system. This was partly done with a lightning talk. Sadly I couldn’t show my slides, but here they are for your convenience. Anyhow, they are not particularly relevant.

Also, @KristiProgri asked me to help taking pictures of the event. I’m probably the worst photographer in the world and I don’t have a special interest in photography as a hobby, but for some months I’ve been trying to take documentary pictures supposedly relevant to the Wikimedia projects. The good thing is that it seems I’m getting better, especially since I changed to a new smartphone which is making magic with my pictures. I just use a mere Moto G7+ smartphone, but it’s making me really happy with the results, exceeding any of my expectations. I'll just say I found that the standard camera application doesn’t work well for me when photographing moving targets, but I’m doing better indeed with the wonderful Open Camera Android opensource application.

I uploaded my pictures to Category:Linux App Summit 2019. Please consider adding yours to the same category.

Related to this, I added items to Wikidata too.

I also helped a bit sharing pics on Twitter under #LinuxAppSummit:

And finally, I helped the local team with some minor tasks like moving items and such.

I want to congratulate the whole organization team, and especially the local team, for the results and the love they have put into the event. The results have been excellent and this is another strong step for the interwoven relations between opensource development communities sharing very similar goals.

My participation in the conference was sponsored by the GNOME Foundation. Many thanks for their support.

05 November 2019

Congress/Conference organization tasks list draft

Well, when checking my blog looking for references about resources related to conference organization, I found I didn't have any link to this thing I compiled two years ago (!!??). So this post fixes that.

After organizing a couple of national and international conferences I compiled a set of tasks useful as a skeleton for your next conference. The list is neither absolutely exhaustive nor strictly formal, but it’s complete enough to be, I think, accurate and useful. In its current form this task list is published at tree.taiga.io as Congress/Conference organization draft: «a simplified skeleton of a kanban project for the organization of conferences. It is specialized in technical and opensource activities, based on real experience».

I think the resource is still valid and useful. So feel free to use it and provide feedback.

task list screenshot

Now, thinking aloud, and considering my crush on EPF Composer, I seriously think I should model the tasks with it as an SPEM method and publish both the sources and a website. And, hopefully, create tools for generating project drafts in well-known tools (Gitlab, Taiga itself, etc). Reach out to me if you are interested too :-)

Enjoy!

24 October 2019

VCR to WebM with GStreamer and hardware encoding

Many years ago my family bought a Panasonic VHS video camera and we recorded quite a lot of things: holidays, some local shows, etc. I even got paid 5000 pesetas (30€, more than 20 years ago) a couple of times to record weddings in an amateur way. Since my father passed away less than a year ago, I have wanted to convert those VHS tapes into something that can survive better, technologically speaking.

For the job I bought a USB 2.0 dongle and connected it to a VHS VCR through a SCART to RCA cable.

The dongle creates a V4L2 device for video and is detected by Pulseaudio for audio. As I want to see what I am converting live, I need to tee both audio and video to the corresponding sinks, while the other branch goes to the encoders, muxer and filesink. The command line for that would be:

gst-launch-1.0 matroskamux name=mux ! filesink location=/tmp/test.webm \
v4l2src device=/dev/video2 norm=255 io-mode=mmap ! queue ! vaapipostproc ! tee name=video_t ! \
queue ! vaapivp9enc rate-control=4 bitrate=1536 ! mux.video_0 \
video_t. ! queue ! xvimagesink \
pulsesrc device=alsa_input.usb-MACROSIL_AV_TO_USB2.0-02.analog-stereo ! 'audio/x-raw,rate=48000,channels=2' ! tee name=audio_t ! \
queue ! pulsesink \
audio_t. ! queue ! vorbisenc ! mux.audio_0

As you can see, I convert to WebM with VP9 and Vorbis. Something interesting worth mentioning is passing norm=255 to the v4l2src element so it captures PAL, and rate-control=4 (VBR) to the vaapivp9enc element; otherwise it will default to cqp and the file size would end up being huge.
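In case you'd rather drive this from code than from the shell, the very same pipeline description can be handed to GStreamer's parse_launch from Python. A minimal sketch (it merely wraps the command above programmatically; devices and paths are the ones from my setup, so adjust them for yours):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Same description as the gst-launch-1.0 invocation above
pipeline = Gst.parse_launch("""
    matroskamux name=mux ! filesink location=/tmp/test.webm
    v4l2src device=/dev/video2 norm=255 io-mode=mmap ! queue ! vaapipostproc !
        tee name=video_t ! queue ! vaapivp9enc rate-control=4 bitrate=1536 ! mux.video_0
    video_t. ! queue ! xvimagesink
    pulsesrc device=alsa_input.usb-MACROSIL_AV_TO_USB2.0-02.analog-stereo !
        audio/x-raw,rate=48000,channels=2 ! tee name=audio_t ! queue ! pulsesink
    audio_t. ! queue ! vorbisenc ! mux.audio_0
""")

pipeline.set_state(Gst.State.PLAYING)

# Quit the main loop on end-of-stream or error
loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::eos", lambda *args: loop.quit())
bus.connect("message::error", lambda *args: loop.quit())

try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)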

You can see the pipeline, which is beautiful, here:

As you can see, we’re using vaapivp9enc here, which is hardware-enabled. Having this pipeline running on my computer consumed more or less 20% of CPU, with the CPU absolutely relaxed, leaving me the necessary computing power for my daily work. This would not be possible without GStreamer and the GStreamer VAAPI plugins, unlike what happens with other solutions whose instructions you can find online.

If for some reason you can’t find vaapivp9enc in Debian, you should know there are a couple of packages for the Intel drivers, and the one you should install is intel-media-va-driver. Thanks go to my colleague at Igalia Víctor Jáquez, who maintains gstreamer-vaapi and helped me solve this problem.

My workflow for this was converting all tapes into WebM and then cutting them into the different relevant pieces with PiTiVi, which runs on GStreamer Editing Services, both co-maintained by my colleague at Igalia, Thibault Saunier.

22 October 2019

Testing Indico opensource event management software

Indico event management tool

After organizing a bunch of conferences in the past years, I found some communities had problems choosing conference management software. Each alternative had limitations in one way or another. Along the way I collected a list of opensource alternatives, and recently I've become very interested in Indico. This project is created and maintained by CERN (yes, those guys who invented the WWW too).

The most interesting reasons for me are:

Jornadas WMES 2019

With the help of Franc Rodríguez we set up an Indico testing instance at https://indico.olea.org. This system is ready to be broken so feel free to experiment.

So this post is an invitation to any opensource community wanting to test the feasibility of Indico for their future events. Please consider giving it an opportunity.

Here are some items I consider relevant for you:

And some potential enhancements (not fully checked whether they are currently available or not):

  • videoconf alternatives: https://meet.jit.si
  • social networks integration
    • Twitter
    • Mastodon
    • Matrix
  • exports formats
    • pentabarf
    • xcal, etc
  • full GDPR compliance (it seems you just need to add the relevant information to your instance)
  • gravatar support
  • integration with the SSO used by the respective community (to be honest, I didn't check the Flask-Multipass features)
  • maybe an easier invitation procedure: sending invitation links by email for full setup;
  • map integration (OSM and others).

For your tests you’ll need to register at the site and contact me (look at the bottom of this page) so I can add you as a manager for your community.

I think it would be awesome for many communities to share a common software product. Wouldn't it?

PD: Great news, next March CERN will host an Indico meeting!
PPD: Here you can check a full configured event organized by the Libre Space Foundation people: Open Source CubeSat Workshop 2019.
PPPD: And now I got your attention check our Congress/Conference organization tasks list. It’s free!

17 October 2019

Gnome-shell Hackfest 2019 – Day 3

As promised, some late notes on the 3rd and last day of the gnome-shell hackfest, so yesterday!

Some highlights from my partial view:

  • We had a mind-blowing, in-depth discussion about the per-crtc frame clocks idea that’s been floating around for a while. What started as “light” before-bedtime conversation the previous night continued the day after, straining our neurons in front of a whiteboard. We came out wiser nonetheless, and have a much more concrete idea about how it should work.
  • Georges updated his merge request to replace Cogl structs with graphene ones. This now passes CI and was merged \o/
  • Much patch review happened in place, and some other pretty notable refactors and cleanups were merged.
  • The evening was more rushed than usual, with some people leaving already. The general feeling seemed good!
  • In my personal opinion the outcome was pretty good too. There’s been progress at multiple levels and new ideas sparked, you should look forward to posts from others :). It was also great to put a face to some IRC nicks, and meet again all the familiar ones.

Kudos to the RevSpace members and especially Hans, without whom this hackfest couldn’t have happened.

16 October 2019

Gnome-shell Hackfest 2019 – Day 2

Well, we are starting the 3rd and last day of this hackfest… I’ll write about yesterday, which probably means tomorrow I’ll blog about today :).

Some highlights of what I was able to participate in or witness:

  • Roman Gilg of KDE fame came to the hackfest; it was a nice opportunity to discuss mixed DPI densities for X11/Xwayland clients. We first thought about having one server per pixel density, but later on we realized we might not be that far from actually isolating all X11 clients from each other, so why stop there.
  • The conversation drifted into other topics relevant to desktop interoperation. We discussed window activation and focus stealing prevention, a topic “fixed” in Gnome but via a private protocol. I already had a protocol draft around, which was sent today to the wayland-devel ML.
  • A plan was devised for what is left of Xwayland-on-demand, and an implementation is in progress.
  • The designers have been doing some exploration and research on how we interact with windows, the overview and the applications menu, and thinking about alternatives. At the end of the day they’ve demoed to us the direction they think we should take.

    I am very much not a designer and I don’t want to spoil their fine work here, so stay tuned for updates from them :).

  • As the social event, we had a very nice BBQ with some hackerspace members. Again kindly organized by Revspace.

14 October 2019

Gnome-shell Hackfest 2019 – Day 1

So today kickstarted the gnome-shell hackfest in Leidschendam, the Netherlands.

There’s a decent number of attendants from multiple parties (Red Hat, Canonical, Endless, Purism, …). We all brought various items and future plans for discussion, and have a number of merge requests in various states to go through. Some exciting keywords are Graphene, YUV, mixed DPI, Xwayland-on-demand, …

But that is not all! Our finest designers also got together here, and I overheard they are discussing the usability of the lock screen, among other topics.

This event wouldn’t have been possible without the Revspace hackerspace people and especially our host Hans de Goede. They kindly provided the venue and necessary material; I am deeply thankful for that.

As there are various discussions going on simultaneously it’s kind of hard to keep track of everything, but I’ll do my best to report back over this blog. Stay tuned!

13 October 2019

Jornadas Wikimedia España WMES 2019: Wikidata edit-a-thon on the historical built heritage of Andalusia

Jornadas WMES 2019

In my last post I already mentioned that I will lead a workshop on editing with Wikidata at the Jornadas Wikimedia España 2019. Here I present the links and references we will use in the workshop. We focus on the case of the Andalusian historical built heritage because I have been working with it for a while and I'm familiar with it, but it can be extrapolated to any other similar domain.

I want to encourage anyone interested to participate, regardless of your experience with Wikidata. I think it will be worth it. What I do ask, please, is that you all bring a laptop.

Official references

Main Wikimedia services of interest to us

Related material in the Wikimedia projects:

Related SPARQL queries to Wikidata:
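(As a taste of what these queries look like, here is a minimal Python sketch against the Wikidata Query Service. Fair warning: the IDs below — P1435 for heritage designation, P131 for located-in, Q5783 for Andalusia — are written from memory as an illustration only, so verify them in Wikidata before reusing the query.)

import requests

# Illustrative query: some items with a heritage designation (P1435)
# located, transitively (P131+), in Andalusia (Q5783) -- verify these IDs!
query = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P1435 ?designation ;
        wdt:P131+ wd:Q5783 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "es,en". }
}
LIMIT 10
"""

r = requests.get("https://query.wikidata.org/sparql",
                 params={"query": query, "format": "json"})
for row in r.json()["results"]["bindings"]:
    print(row["item"]["value"], row["itemLabel"]["value"])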

Other external services of interest:

Monument examples

We will use a few examples as reference material. The Alhambra one is particularly relevant because it is the entry of the Andalusian guide with the most data in the whole catalogue, by far.

Alhambra de Granada

Puente del Hacho

Estación de Renfe de Almería

Acknowledgements

Jornadas WMES 2019

My attendance at the Jornadas has been possible thanks to the financial support of the Wikimedia España association. My thanks to them.

10 October 2019

Next conferences

Just to say I’m going to a couple of conferences here in Spain:

At WMES 2019 I will lead a Wikidata workshop about adding historical heritage data, basically repeating the one at esLibre.

At LAS 2019 I plan to attend the Flatpak workshops and to call a BoF for people involved in opensource conference organization, to share experiences and reuse tools.

Lots of thanks to the Wikimedia España association and the GNOME Foundation for their travel sponsorship. Without their help I could not attend both.

See you in Pamplona and Barcelona.

01 October 2019

A new time and life next steps

the opensource symbol

Since the beginning of my career in 1998 I’ve been involved with Linux and opensource in one way or another. From sysadmin I grew into distro making, hardware certification and finally consulting, plus some other added skills. In parallel I developed a personal career in libre software communities and had the privilege of giving lots of talks, particularly in Spain and Ibero-America. That was a great time. All this stopped in 2011 with the combination of the big economic crisis in Spain and a personal psychological situation. All of it led me to move back from Madrid to my home city, Almería, to recover my health. Now, after several years here, I’m ready to take a new step and reboot my career.

Not all this time has been wasted. I dedicated lots of hours to a new project which in several senses has been the inverse of the typical practices in opensource communities. Indeed, I’ve tried to apply most of them, but instead of on the world-wide Internet, now with a 100% hyper-local focus. This meant working in the context of a medium-small city (fewer than 200k inhabitants) with intensive in-person meetings supported by Internet communications. Not all the results have been as successful as I intended, probably because I kept very big expectations; as Antonio Gramsci said, «I’m a pessimist because of intelligence, but an optimist because of will» :-) The effort was developed in what we named HackLab Almería, and some time ago I wrote a recap about my experience. To me it was both an experiment and a recovery therapy.

That time served to recover ambitions, à la Gramsci, and to bring relevant itinerant events to our nice city, always related to opensource. Retaking the experience of the good old HispaLinux conferences, we were able to host a series of extraordinarily great technology conferences: from PyConES 2016 to Akademy 2017, GUADEC 2018 and LibreOffice Conference 2019. For some time I thought Almería was the first city to host these three… until I realized Brno did it before! The icing on the cake was the first conference on secure programming in Spain: SuperSEC. I consider all of this a great personal success.

I forgot to mention I enrolled in a university course too, more as an excuse to work in an area for which I had never found time: information and software methodology modeling. This materializes in my degree project, in an advanced state of development but not yet finished, around the ISO/IEC 29110 standard and the EPF Composer. I’m giving it a final push in the coming months.

Now I’m closing this stage to start a new one, with different priorities and goals. The first one is to reboot my professional career, so I’m looking for a new job and have started a B2 English certification course. I’m resuming my participation in opensource communities —I’ll attend LAS 2019 next November— and hope to contribute small but non-trivial collaborations to several communities. After all, I think what I’ve mostly been doing all these years is shepherding the digital commons.

See you in your recruitment process! ;-)

PS: a Spanish version of this post is available.

30 September 2019

A new stage: a change of era and my professional future

Changing tack: more of the same, but in a different way

the opensource symbol

Since I started my professional career in 1998 I have almost always been involved with the Linux world and free software. Even if my career hasn't been too brilliant, I don't regret so much, as they say, what I have done as what I haven't done and the opportunities I could have seized. But everything changed in 2011, when the Spanish economic crisis, work problems and, above all, personal ones combined to force me to move back from Madrid to my home city, Almería, to recover. It has taken time, but it seems we have made it. It was in that period that what we ended up calling HackLab Almería was born.

Personally, the activity in the HLA was an experiment: applying the baggage of knowledge and practices acquired over more than 10 years in open opensource communities but, in this case, with a totally inverse approach — from mainly online communities with even worldwide reach to a radically hyper-local orientation with a necessarily intense in-person scope. At that time I had a lot of free time and threw myself into creating content, identifying and establishing personal contacts, and energising a new community that could reach self-sustaining inertia and critical mass. It was also in that period, during an occasional visit to Madrid — by then my travelling had dropped to almost zero — that, after a motivating conversation with ALMO, I started to recover lost enthusiasm and the drive to create. This finally crystallised into months of intense activity which, free from professional or economic ties, also served to recover skills and cultivate new ones, going deep into a project aligned with my experience and interesting enough to permanently keep my attention. Useful time, as therapy, to recover self-esteem, peace of mind and intellectual performance.

Along the way I took the chance to reinforce my practice of the hacker ethic: for years I have been a great dilettante with, perhaps, many things to say but very little impact. And that is not a nice feeling for a narcissist. So I decided to make an effort to talk less and do more. The degree of achievement could be discussed some other time, although back then I did write a retrospective. I also devoted time to going deeper into open knowledge and the digital commons: MusicBrainz, Wiki Commons, Wikidata, OpenStreetMap, etc.

Around that time, and practically by chance, the opportunity arose to bring important technology gatherings — one way or another always related to opensource — to this city of mine, peripheral within the periphery and, perhaps, the only island of Spain located on the Iberian Peninsula itself. While I already had previous experience promoting and collaborating in those HispaLinux congresses, the work on PyConES 2016 — thanks Juanlu for the trust — was a qualitative leap that later materialised in hosting Akademy 2017, GUADEC 2018 and LibreOffice Conference 2019 in Almería. For some time I thought ours would be the first city to achieve this triplet… until I discovered that Brno had beaten us to it :-) Along the way we also invented SuperSEC, the first national conference on secure programming in Spain.

Now I consider this stage closed, partly quite frustrated. I am not satisfied with all the results, particularly with the local impact. While preparing this article I had thought about going into some descriptive details but… what for? Those who could have been interested weren't at the time, and it would still hurt me to do the retrospective and… in the end, what for? To be another lapse vanished into entropy. My conscience is clear, though, because I know that, for better or worse, I gave my best.

So: changing tack. October 1st is not a bad date to do it. I'm throwing myself back into developing my professional profile and, attention dear audience, I'm looking for a job. Obviously, the closer and more related to the free software world and its surroundings, the better. There is still a lot to do to build the free digital infrastructure needed for an open digital society, and I want to keep being part of it. After all, I believe that everything I've done since the 90s has been shepherding the digital commons.

See you in your recruitment process ;-)

PS: an English version of this article is available.

23 September 2019

LibreOffice Conference 2019 by numbers

LibreOffice Conference 2019 badge

LibreOffice Conference 2019 ended and… it seems people really enjoyed it!

Here I provide some metrics about the conference. Hopefully they’ll be useful for the coming years.

  • Attendees:
    • 114 registered on the website before the Aug 31 deadline;
    • 122 total registered at the end of the conference;
    • 102 total physically registered at the conference.
  • Registered countries of origin: Albania, Austria, Belgium, Bolivia, Brazil, Canada, Czech Republic, Finland, France, Germany, Hungary, India, Ireland, Italy, Japan, Korea - Republic of, Luxembourg, Poland, Portugal, Romania, Russian Federation, Slovenia, South Africa, Spain, Sweden, Switzerland, Taiwan, Turkey and United Kingdom;
  • 4 days: 1 for board and community meetings and 3 for conference talks;
  • 3 tracks;
  • 68 talks, 6 GSoC presentations and 13 lightning talks;
  • 1 new individual certification;
  • 4 social events:
    • welcome party, 70 participants;
    • beach dinner party, 80 participants;
    • theatrical visit to the Alcazaba castle, 50 participants;
    • after conference city visit, 14 participants;
  • 1 hackfest, approximately 50 participants;
  • 1 conference shuttle service bus (capacity for more than 120 persons);
  • Telegram communications:
  • Conference pictures, at least:
  • Weather: two completely unexpected rainy days in Almería o_0
  • About economics, the conference ended with some surplus, which is nice. Thanks a lot to our sponsors for making this possible.

Next is a list of data tables with some more information.

Meals at university cafeteria:

                  Sept. 10   Sept. 11   Sept. 12   Sept. 13   Total
meals: expected   70         106        106        107        389
meals: served     54         92         97         86         329


T-shirts, ordered to our friends of FreeWear:

type             size (EU)   number
unisex           S           9
unisex           M           24
unisex           L           36
unisex           XL          15
unisex           XXL         15
unisex           XXXL        7
unisex - tight   S           1
unisex - tight   M           4
unisex - tight   L           2
total                        113


The LibOCon overnight stays at Civitas were:

day          number
2019/09/05   1
2019/09/06   1
2019/09/07   5
2019/09/08   32
2019/09/09   57
2019/09/10   75
2019/09/11   77
2019/09/12   77
2019/09/13   64
2019/09/14   13
2019/09/15   3
2019/09/16   3
total overnights: 408


Twitter campaign activity at @LibOCon:

Month    tweets   impressions   profile visits   mentions   new followers
Apr      2        2321          228              9          10
May      6        8945          301              6          19
Jun      3        3063          97               3          5
Jul      3        5355          188              3          13
Aug      10       8388          208              10         2
Sept     75       51200         1246             158        (not available)
totals:  99       79272         2268             189        49



PS: I’m amazed I’ve blogged almost nothing about the conference until now!!
PPS: Added the overnight numbers at the conference hotel.

30 July 2019

HackIt 2019, level 3³

I think this challenge took us more than 50% of our time in this year's HackIt :-O, but it's the kind of challenge we love: you know what you have to do, but the path is tortuous, painful and complex. Let's get to it 🙂

The title of the challenge always carries some hint, as a play on words. That cube as a superscript…

We analyse the dump and see that it's a pcap. We open it with Wireshark and poke around for a while.

A self-respecting HackIt can't go without a Wireshark challenge 🙂

That port, tcp/25565, looks familiar…

You could also deduce that it was a capture of the Minecraft protocol by looking at the strings. Something like «generic.movementSpeed?» appears. Googling it leads straight to Minecraft, no doubt.

Yep, Minecraft. On the server 51.15.21.7. Here we were trolled once more by @imobilis… or maybe it was an easter egg in the challenge 🙂 The thing is that the server exists (!) and hosts a world where you spawn on top of a tower you can't get out of. It even has messages on some signs (of course we tried them all, without success), like the one in the picture (Mundo Survival Kots).

We spent ages «playing» on that tower. The messages are false leads.

The dump contains messages sent from the client (10.11.12.52) to the server (51.15.21.7) and vice versa. The payload of the messages is (or seemed to be!) in the clear and can be extracted with tshark.

$ tshark -r dump -T fields -e data

1b0010408d2e07aeae7d91401400000000000040855ae9b632828401
12004a0000000059c86aa10000000000001ac9
0a0021000000028daf8dbd
0a000e000000028daf8dbd

At this point we thought we had it made, because we saw there were Minecraft protocol dissectors for Wireshark, like this one or this one. All very rosy… until we looked at the date of the last commit: 2010. Great… they're useless to us. So we rolled up our sleeves, went for coffee, and got down to studying the Minecraft protocol specification, which reads as if written by someone taking notes at a talk rather than as a properly written specification. There are exactly 0 examples of the most cumbersome parts (VarInt, packets with compression, …). In the end, our teammate Joserra, an Excel wizard, decided our scripts were a pile of **** and that he was going to do it in Excel ¯\_(ツ)_/¯
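(As an aside: the VarInt encoding the spec explains so tersely is just a LEB128-style 7-bits-per-byte scheme. A quick Python sketch of a decoder — my own illustration, not part of the original challenge — in case it saves someone else a coffee:)

def read_varint(buf, pos=0):
    # 7 data bits per byte, least-significant group first; the high bit
    # of each byte signals that more bytes follow
    result, shift = 0, 0
    while True:
        byte = buf[pos]
        pos += 1
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:
            break
        shift += 7
        if shift >= 35:
            raise ValueError("VarInt too long")
    if result >= 2**31:  # Minecraft VarInts are signed 32-bit values
        result -= 2**32
    return result, pos

print(read_varint(bytes([0x1b])))  # (27, 1): the packet length seen above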

If we take the first payload, 001b is the packet size (27 bytes), 0x10 the packetID and 408d2e07aeae7d91401400000000000040855ae9b632828401 the packet's payload. 0x10 is the ID of a «Player Position» packet («Bound to server» indicates that it's the client sending it to the server). The payload is divided into 4 fields: x (double), feet y (double), z (double), «on ground» (boolean). All position packets (0x10, server bound) are odd, so they end in 1 (true, on ground). We want to know x, y, z.

x= 408d 2e07 aeae 7d91
y = 4014 0000 0000 0000
z = 4085 5ae9 b632 8284

To convert from hex to double, we invoke a macro, hex2dbl

It’s not the first time we solve a challenge with Excel 🙂

and we obtain the x, y, z positions.
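
Outside Excel, the same decoding can be sanity-checked with a few lines of Python (a minimal sketch, not part of the original Excel workflow; struct reads each 8-byte group as a big-endian IEEE 754 double):

import struct

# Decode the three big-endian doubles of the first «Player Position» payload
for name, hexval in [("x", "408d2e07aeae7d91"),
                     ("y", "4014000000000000"),
                     ("z", "40855ae9b6328284")]:
    (value,) = struct.unpack(">d", bytes.fromhex(hexval))
    print(name, "=", value)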

Finally, we generate a scatter plot and obtain the key 🙂

@imobilis must have spent hours moving the Minecraft player across the map to trace the text. If you look closely, it always starts from one point, goes down, and climbs back up to that point to trace the next letter. Analyzing the payload, the height of that upper area is different from the height where the letters are drawn. Probably, in the game, there was some kind of step marking the «safe» zone (where you could move to the right to paint the next letter). What a lot of work!

Watch out for uppercase, lowercase, 0 vs. O, 1 vs. I, etc… It was the final trolling of a fine challenge 🙂

BLoCkD3f1nEdPrOt0coL

UPDATE: @navarparty (the first team to beat this challenge) has published their solution (in Go!). Thanks @tatai!
I also recommend reading w0pr’s write-up and their elegant solution in Python + pygame.

29 de July de 2019

HackIt! 2019, Level 2

This level seems to have tripped up quite a few teams. Although we spent several hours going around in circles, once it’s solved you realize that what made it complex were really several red herrings. If you followed them, you were dead. The level starts with 3 files: yellow, red, green. Here’s the first bait: why these colors?… Anyway, extracting strings, the most striking file is red.

Juanan-2:2 juanan$ strings -n 12 red|more
Ktablered1000red1000^LCREATE TABLE red1000(redz blob)N
ytablered100red100
CREATE TABLE red100(reda varchar(10),redb varchar(10))H
utablered9red9
CREATE TABLE red9(reda varchar(10),redb varchar(10))H
utablered8red8
CREATE TABLE red8(reda varchar(10),redb varchar(10))H
utablered7red7
CREATE TABLE red7(reda varchar(10),redb varchar(10))H
utablered6red6
CREATE TABLE red6(reda varchar(10),redb varchar(10))H
utablered5red5
CREATE TABLE red5(reda varchar(10),redb varchar(10))H
utablered4red4
...
CREATE TABLE red1(reda varchar(10),redb varchar(10))
0000000 5473 6572 6d34 3352 000a

Well… a database, probably SQLite. And the redz column of the red1000 table is of type blob. We went around and around on this. We even managed to import the table structure.

In the red1 table, the reda column holds something:

But that already showed up in the strings; no need to go down the SQLite rabbit hole… Mmmh, let’s see what it means:

misterio = [0x54,0x73,0x65,0x72,0x6d,0x34,0x33,0x52,0x00,0x0a]
print("".join(chr(c) for c in misterio))
Tserm43R

Tserm43R? WTF? @ochoto suggested in the group that maybe each pair of values had to be swapped (big endian?), because the last bytes are a line feed + end of string, reversed (0x00, 0x0a). Here we go (dropping the line feed):

misterio = [0x54,0x73,0x65,0x72,0x6d,0x34,0x33,0x52]
"".join(chr(a) + chr(b) for a, b in zip(misterio[1::2], misterio[::2]))

'sTre4mR3'

Makes sense, it looks like a fragment of a string in h4x0r speak. Let’s leave it there and go for green. This one was easier:

$ binwalk -e green

DECIMAL       HEXADECIMAL     DESCRIPTION
--------------------------------------------------------------------------------
27337196      0x1A121EC       HPACK archive data
33554432      0x2000000       gzip compressed data, has original file name: "trololo", from Unix, last modified: 2019-07-15 23:29:50

$ ls -al _green.extracted/
total 8
drwxr-xr-x   3 juanan  wheel    96 Jul 25 21:28 .
drwxrwxrwt@ 70 root    wheel  2240 Jul 29 22:15 ..
-rw-r--r--   1 juanan  wheel     8 Jul 25 21:28 trololo

$ cat _green.extracted/trololo
ce1VEd!

Well, if we concatenate red with green (same order as in the challenge statement), we get ‘sTre4mR3ce1VEd!’. Looking very good. Only one file left, yellow. It’s a binary file, with no magic number and no associated strings. After much going around, we came up with something obvious (right, @navarparty? XDDD): opening it with Audacity:

Bingo: you can hear someone spelling out, in English and at high speed, the missing part of the password. Adjusting the speed and taking into account that the darker areas of the signal indicate uppercase letters, we get R0tT3nB1t.

So… R0tT3nB1tsTre4mR3ce1VEd!

Note: this post doesn’t reflect the difficulty of the challenge. It was not «that easy» 🙂 We spent a loooooooong time analyzing the 3 binaries until we found the right sequence of steps and tools.

HackIt! 2019, level 1

One more year (that makes 20 now) we attended the Euskal Encounter ready to give it our all, especially at the HackIt! and the CTF. In this EE27 HackIt! we sweated buckets to pass 3 of the 6 challenges, achieving second place, which gives you an idea of the difficulty level. That said, all of them were well thought out and carefully crafted, so first of all, as always, thanks to @imobilis and @marcan42 for their work. Honestly, just imagining what it must have taken to implement some of them (level 3, the Minecraft one, must have been painful… or level 6, with the LED strip and the LFSR in the image) makes me want to buy them a round so they come back next year with new ideas 🙂 Anyway, let’s get down to business: level 1, Extreme Simplicity.

We open the source code and see the following piece of JS:

function q(e){var a=">,------------------------------------------------------------------------------------[<+>[-]],----------------------------------------------------[<+>[-]],------------------------------------------------------------------------------------------------------------------[<+>[-]],----------------------------------------------------------------------------------------------------------------[<+>[-]],-------------------------------------------------[<+>[-]],--------------------------------------------------------------------------------------------------------------------[<+>[-]],-----------------------------------------------------------------------------------[<+>[-]],-------------------------------------------------------------------[<+>[-]],------------------------------------------------------------------------------------------------------------------[<+>[-]],-------------------------------------------------[<+>[-]],----------------------------------------------------------------------------------------------------------------[<+>[-]],------------------------------------------------------------------------------------[<+>[-]],[<+>[-]][-]+<[>>>++[>+++[>+++++++++++++++++++<-]<-]>>.-------------.-.<<<<[-]<[-]]>[>>>++[>+++[>+++++++++++++++++<-]<-]>>+.[>+>+<<-]>+++++++++++.>--..<----.<<<[-]]";let r=0,f=0;var i=a.length,c=new Uint8Array(3e4),s="",b=10240,k=0;for(r=0;r<i&&!(b<0);r++)switch(b--,a[r]){case">":f++;break;case"<":f>0&&f--;break;case"+":c[f]=c[f]+1&255;break;case"-":c[f]=c[f]-1&255;break;case".":s+=String.fromCharCode(c[f]);break;case",":k>=e.length?c[f]=0:c[f]=e.charCodeAt(k),k++;break;case"[":if(!c[f])for(var t=0;a[++r];)if("["===a[r])t++;else if("]"===a[r]){if(!(t>0))break;t--}break;case"]":if(c[f])for(t=0;a[--r];)if("]"===a[r])t++;else if("["===a[r]){if(!(t>0))break;t--}}return s}
$(function(){$('#password').keyup(function(e){$('#password').css({'background-color':q($('#password').val())});});});

Here we started wasting time (business as usual 🙂) debugging with the DevTools. We created a new snippet, pasted the code, clicked {} for a pretty-print, inserted one last line: console.log( q('password') ), set a breakpoint on line 2 of q() and stepped through the function… Fine, it could be solved that way, but it would have taken us hours… Someone in the group, quite sensibly, not only saw that this code was BrainFuck, but also thought that translating it to C was a good first step. We cloned this translator, ran it on the BrainFuck and obtained this simple program.

If we look closely, we can see the codes of several ASCII characters (84, 52, 114…), so, first of all, we tried that sequence and… Bingo!

import re

# The translated C decrements the tape by each expected character's ASCII code
# ("tape[...] -= 84;", "-= 52;", …), so we just harvest those constants.
file = open("bf.c", "r")
for line in file:
    match = re.search(r'tape.*-= ([0-9]*)', line)
    if match:
        if int(match.group(1)) > 13:  # skip the small decrements used for output
            print(chr(int(match.group(1))), end='')

21 de July de 2019

What am I doing with Tracker?

“Colored net” by Chris Vees (priorité maison) is licensed under CC BY-NC-ND 2.0

Some years ago I was asked to come up with some support for sandboxed apps wrt indexed data. This drummed up into Tracker 2.0 and domain ontologies, allowing those sandboxed apps to keep their own private data and their own collection of Tracker services to populate it.

Fast forward to today and… this is still largely unused: Tracker-using flatpak applications still whitelist org.freedesktop.Tracker, and are thus allowed to read and change content there. Although I’ve been told it’s mostly been lack of time… I cannot blame them: domain ontologies offer perfect isolation at the cost of perfect duplication. It may do the job, but it is far from optimal.

So I got asked again: “do we have a credible story for sandboxed Tracker?”. One way or another, it seems we don’t; back to the drawing board.

Somehow, the web world seems to share some problems with our case, and it seems to handle them with some degree of success. Let’s have a look at some excerpts of the Sparql 1.1 recommendation (emphasis mine):

RDF is often used to represent, among other things, personal information, social networks, metadata about digital artifacts, as well as to provide a means of integration over disparate sources of information.

A Graph Store is a mutable container of RDF graphs managed by a single service. […] named graphs can be added to or deleted from a Graph Store. […] a Graph Store can keep local copies of RDF graphs defined elsewhere […] independently of the original graph.

The execution of a SERVICE pattern may fail due to several reasons: the remote service may be down, the service IRI may not be dereferenceable, or the endpoint may return an error to the query. […] Queries may explicitly allow failed SERVICE requests with the use of the SILENT keyword. […] (SERVICE pattern) results are returned to the federated query processor and are combined with results from the rest of the query.

So according to Sparql 1.1, we have multiple “Graph Stores” that manage multiple RDF graphs. They may federate queries to other endpoints with disparate RDF formats and whose availability may vary. This remote data is transparent, and may be used directly or processed for local storage.
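
To make the transposition concrete, this is roughly the shape of such a federated query from a client’s point of view. A minimal sketch using the SPARQLWrapper Python library against a generic HTTP Sparql endpoint (all URLs and terms are placeholders; this is not Tracker’s own API):

from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical sketch: ask one endpoint to combine its own data with results
# federated from another endpoint; SILENT ignores a failing remote service.
query = """
SELECT ?item ?label WHERE {
  ?item a <http://example.org/ns#Document> .
  SERVICE SILENT <http://localhost:8080/other-endpoint> {
    ?item <http://www.w3.org/2000/01/rdf-schema#label> ?label .
  }
}
"""

endpoint = SPARQLWrapper("http://localhost:8080/endpoint")  # placeholder URL
endpoint.setQuery(query)
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()

for binding in results["results"]["bindings"]:
    print(binding["item"]["value"], binding.get("label", {}).get("value"))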

Let’s look back at Tracker: we have a single Graph Store, which really is not that good at graphs. Responsibility for keeping that data updated is spread across multiple services, and ownership of that data is equally scattered.

Then it clicked: if we transpose those same concepts from the web to the network of local services that your session is, we can use those same mechanisms to cut short a number of drawbacks:

  • Ownership is clear: If a service wants to store data, it would get its own Graph Store instead of modifying “the one”. Unless explicitly supported, Graph Stores cannot be updated from the outside.
  • So is lifetime: There’s been debate about whether data indexed “in Tracker” is permanent data or a cache. Everyone would get to decide their best fit, unaffected by everyone else’s decisions. The data from tracker-miners would totally be a cache BTW :).
  • Increases trustability: If Graph Stores cannot be tampered with externally, you can trust their content to represent the best effort of their only producer, instead of the minimum common denominator of all services updating “the Graph Store”.
  • Gives a mechanism for data isolation: Graph Stores may choose to limit the number of graphs seen on queries federated from other services.
  • Is sandboxing friendly: From inside a sandbox, you may get limited access to the other endpoints you see, or to the graphs offered. Updates are also limited by nature.
  • But it works the same without a sandbox. It also has some benefits, like reducing data duplication and making for smaller databases.

Domain ontologies from Tracker 2.0 also handle some of those differently, but very very roughly. So the first thing to do to get to that RDF nirvana was muscling up that Sparql support in Tracker, and so I did! I already had some “how could it be possible to do…” plans in my head to tackle most of those, but unfortunately they require changes to the internal storage format.

As it seemed the time to do one (FTR, the storage format has been “unchanged” since 0.15), I couldn’t just do the bare minimum work: it was too good an opportunity to miss, so instead of leaving the remaining Sparql 1.1 syntax support for some future format change, I took care of it now.

Things ended up escalating into https://gitlab.gnome.org/GNOME/tracker/commits/wip/carlosg/sparql1.1, where it can be said that Tracker supports 100% of the Sparql 1.1 syntax. No buts, maybe bugs.

Some notable additions are:

  • Graphs are fully supported there, along with all graph management syntax.
  • Support for query federation through SERVICE {}
  • Data dumping through DESCRIBE and CONSTRUCT query forms.
  • Data loading through LOAD update form.
  • The pesky negated property path operator.
  • Support for rdf:langString and rdf:List
  • All missing builtin functions

This is working well, and is almost drop-in (one’s got to mind the graph semantics), so making it material for Gnome 3.34 starts to sound realistic.

As Sparql 1.1 is a recommendation finished in 2013, and no newer versions seem to be in the works, I think it can be said that Tracker is reaching maturity. Only the HTTP Graph Store Protocol (because why not) remains as the big-ish item before we can reasonably say we implement all 11 documents. Note that Tracker’s bet on RDF and Sparql started at a time when 1.0 was the current document and 1.1 just an early draft.

And sandboxing support? You might guess already the features it’ll draw from. It’s coming along; actually using Tracker as described above will go a bit deeper than the required query language syntax. More on that when I have the relevant pieces in place. I just thought I’d stop a moment to announce this huge milestone :).

12 de July de 2019

«La película», a short story

I’m rescuing from my archives a little story I wrote in March 1998. It may not be a big deal, but I think the result is lovely.


The afternoon promised to be fantastic. The film series itself promised it. My friends were doing a great job at that film club, and I finally had the chance to see Dune, one of the most special films in science fiction cinema. A good moment to have fun, a good moment to get to know the world a little better. I was not even sixteen years old.

I arrived late and could only get a seat at the back. Not ideal, but at least I wasn’t left outside like others. Next to me sat Bearn. This Bearn was a very strange guy. Too strange for me back then (now I’m the freak), but he wasn’t bad company: fun and doing his own thing. We barely knew each other, and I suppose his impression of me couldn’t have been too different. Little does that matter in the end. The only relevant fact is that he was the sole witness to my bitterness, that afternoon which surfaces in my memory today.

I don’t know exactly when it happened. I could only say it was during the second year of BUP. I don’t know the month or even the season. I barely remember the boy, and of her, only a haze, a desire, an image of beauty freed from all flaw. Like any self-respecting memory.

I suppose it didn’t take me long to notice the strange familiarity of the girl sitting in the seat in front of me. That capacity for recognition has been switched to automatic mode in me ever since. Though I must point out to the reader that there is no special reason for it now; it’s simply a habit and it does me no harm. As I was saying, I soon recognized in that girl’s features those of the person who had been driving me out of my mind for weeks. Months. I don’t know whether by then I had already shown her my love (my first and only declaration of love, a sad and solitary “I love you” at the school door, in the only second I could be alone with her); the fact was that she was there and I was not prepared. My restless ego of a juvenile lover got nervous and my sense of observation sharpened to the point of paranoia. She was alone. A girl like that could not be alone. They never are. They move in the shadow of a male when not protected by the invulnerable palisade of their girlfriends. And she was alone. My eyes scanned all the space around her, suspicious even of the air she breathed. And they came to rest on a guy sitting to her left. I barely knew him; he seemed like a good sort and had gone out a couple of times with a classmate of mine. A good guy who didn’t fit into the picture. I was perplexed when I confirmed that they had indeed come together. The film had started minutes before.

Heartbreak in youth is something very intense. It is empty of all reality, but later, with time, one will long for the passion. When the years have burned the soul, heartbreak is only bitterness that stokes the fire. In the damned years of youth it is a heroic virtue. As stupid as all virtues, it still shone with heartrending beauty. That afternoon, perhaps a winter one, my heart burst into pieces, the fuse lit by the spark of two hands taking hold of each other. And neither was mine. That afternoon the world came down on me, in parallel with the initiation journey of the young Atreides. That afternoon I filled the hidden lakes of Dune with my tears. A stone guest, with the sky brushing my fingertips, I lived my heart’s exile in the desert of a desert planet that, however big it might have been, would never fill the loneliness of my poor self-pitying soul. Tonight the film was a different one. The desert is the same.


FidoNET R34: recovering mail from the echomail areas

This entry was originally published in the esLibre forum https://charla.eslib.re.


FidoNet logo

A few months ago I set out to recover digital material from my archives about the early years of the Linux community in Spain, in particular my FidoNet archives. A conversation then came up on Twitter about the R34.Linux echomail and the possibility of recovering mail from that era to rescue and republish it:

I thought Kishpa_’s initiative was wonderful, but when checking my data I found that at some point I had suffered a message-base crash and lost all the mail from five years? or more. The sudden memory of that day hurt almost as much as it did back then.

GoldED editor

Since many of us who were around at the dawn of HispaLinux got started through FidoNET, my question is this: has anyone, by any chance, beaten the vicissitudes of information persistence across the decades and still has their FidoNET archives, ready to be recovered and republished? Not only the mail from the R34.Linux and R34.Unix areas, where it all really began, but any other archived echomail.

If so, let’s get it to Kishpa_. It’s a lovely digital-memory recovery project, even if only for archival purposes.

Come on: everyone go rummage through your backups!

A copy of the HispaLinux website from 1998

Taking advantage of this review session of my nineties archives, I’m posting on this website a snapshot of the HispaLinux association website from March 1998:

GoldED editor

You know: I’m not doing it out of nostalgia but for digital memory.

02 de July de 2019

http://charla.eslib.re is now open

esLibre community

I’m pleased to announce that a new web discussion forum for the esLibre community is now up: https://charla.eslib.re. This is another step promoting the regeneration of what was once the HispaLinux community in Spain into a new future:

Charla esLibre

Obviously we chose Discourse, the best software for running discussion forums these days. And it’s free software, too. My thanks to its whole development team for the wonderful product they’ve created.

Chat groups are also available on Telegram (https://t.me/esLibre) and Matrix (#esLibre:matrix.org). Both groups are joined through a Telegram <-> Matrix bridge.

Thanks to the colleagues who took care of setting up all the services. Welcome, everyone.

29 de June de 2019

Now I have a web Solid pod

I’ve just created my Solid pod: https://olea.solid.community/.

Tim Berners-Lee proposes Solid as a way to implement his original vision for the World Wide Web. If timbl says something like this then I’m interested:

Within the Solid ecosystem, you decide where you store your data. Photos you take, comments you write, contacts in your address book, calendar events, how many miles you run each day from your fitness tracker… they’re all stored in your Solid POD. This Solid POD can be in your house or workplace, or with an online Solid POD provider of your choice. Since you own your data, you’re free to move it at any time, without interruption of service.

More details are at https://solid.inrupt.com/how-it-works.

I’ve poked around just a bit at what Solid can do. I don’t have much time for it now. It’s nice to see how it’s based on linked data, so the potential applications are infinite. And they have a forum too (running Discourse, ♥).

My personal IT strategy requires implementing my own services as much as I can. Solid has a server implementation available that I would like to deploy sometime in the future.

Love to see the Semantic Web coming back.

28 de June de 2019

The video of the 29110_EPF_library presentation talk at esLibre 2019 is now published

29110_EPF_library

I have the enormous satisfaction of announcing that the video of the talk I gave at esLibre 2019 has been published. All the credit goes to César García (elsatch), and it’s published on his channel La Hora Maker. I’m thrilled, and I think the result turned out quite well. Thanks, César.

The slides for the talk are also available on this website.

18 de June de 2019

esLibre 2019: Wikithon on Andalusia’s historic built heritage

esLibre congress

In my last entry I already mentioned that I’ll be giving a couple of sessions at the esLibre congress next Friday in Granada. This entry is dedicated to the hands-on workshop «Wikitatón de patrimonio inmueble histórico de Andalucía: de Andalucía para España y la Humanidad» (a wikithon on Andalusia’s historic built heritage), simply to include a list of links and materials of interest for the workshop. The information is very schematic because it’s only meant to be used in that workshop. Here it goes.

Official references

Main Wikimedia services of interest to us

Related material in the Wikimedia projects:

Related SPARQL queries to Wikidata:

Other external services of interest:

Monument examples

We’ll use a few examples as reference material. The Alhambra is especially relevant because its entry has more data than any other in the whole Andalusian guide catalogue, by a wide margin.

Alhambra de Granada

Puente del Hacho

Estación de Renfe de Almería

10 de June de 2019

Participation in the esLibre 2019 congress

esLibre congress

A while ago I mentioned that I had sent several activity proposals to the esLibre congress, where we’ll meet again, among others, the old friends from the HispaLinux era and, I hope, lots of new people. In the end I dropped one of them, because the gathering lasts a single day and I also wanted to be able to attend other talks and, above all, hang out with old pals.

These are the two:

29110_EPF_library

As I said before, I’m very excited to present the preliminary work on 29110_EPF_library, which is the focus of the final-year project (TFG) I will finally defend in September, just before the LibreOffice Conference we’ll hold in Almería. It will be very useful for structuring how best to communicate the project’s findings, and with a bit of luck I’ll receive some useful feedback for the final report.

About the wikithon: we won’t have much time available to upload many results. If you’re already familiar with Wikidata, perfect; for everyone else I’ll try to lower the entry barrier as much as possible. I don’t plan to prepare much more material than a few reference links, including some queries to the fantastic SPARQL query service https://query.wikidata.org. It will be a very interactive session and, with a bit of luck, there will be more than one of us to lend a hand to the newcomers. Remember it is VERY IMPORTANT to bring your own computer. And if you’re familiar with «linked-data», and in particular with JSON-LD, don’t miss it, because we may need your help ;-)

The congress programme is already published and will only see minor adjustments: https://eslib.re/2019/programa/

esLibre congress programme

For those interested: we have a Telegram group, Hispalinustálgicos, to which you’re all warmly invited.

I hope you’ll decide to come to Granada. Content aside, the best part is the audience it draws: the cream of the Spanish Linux crowd. And of course the city of Granada itself:

«Give him alms, woman,
for there is nothing in life
like the sorrow of being
blind in Granada.
»

25 de May de 2019

Deleting old messages in Gmail

I’m jotting down a quick recipe here so I don’t forget it, and in case it’s of interest to other people.

The problem: I’ve been using Gmail for maaany years and until today I had never considered doing a cleanup. But I was at 93% capacity: 17.74 GB used out of 19 GB available, so I rolled up my sleeves and managed to bring it down a bit, to 14.34 GB = 75%, by deleting all messages older than 10 years.

93% usage = yes, I’m an email hoarder

The solution: basically, use this Python script. I added the necessary imports and left it in a Gist. For the script to work, you must enable IMAP in Gmail and (this is important) access for «less secure» apps in your Google account’s security options.
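
The core of the script boils down to something like this (a minimal sketch, not the actual Gist; the credentials and the cutoff date are placeholders, and the X-GM-LABELS trick is a Gmail-specific IMAP extension):

import imaplib

M = imaplib.IMAP4_SSL('imap.gmail.com')
M.login('you@gmail.com', 'your-password')      # placeholders
M.select('"[Gmail]/All Mail"')
# Find everything older than the cutoff date (IMAP wants DD-Mon-YYYY)
typ, data = M.search(None, '(BEFORE "01-Jan-2010")')
for num in data[0].split():
    # Adding the \Trash label moves the message to the bin
    M.store(num, '+X-GM-LABELS', '\\Trash')
M.close()
M.logout()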

Enable Less secure app access.

If you have 2-Step Verification, you’ll have to disable it temporarily. When you’re done running the script, remember to turn it back on (and disable Less secure app access).

75% usage, much better 🙂

05 de May de 2019

Intellectual property registries

Last Monday, Carlos J. Vives and I gave a talk about Creative Commons and open content at a local education center:

The talk is part of the ccALM Almería Creative Commons Festival.

The only goal of this entry is to collect some links to IP registration services useful for any digital creator on the Internet, particularly for open culture works. As far as I remember:

In the past there was this Digital Media Rights service, but it seems broken now: http://dmrights.com/

Limited to Spain, there are two publicly managed services:

Some of these services are meant to be used as a legal resource in case of litigation. Others are just a historical record for websites. If you want to use any of them, study carefully their features and the advantages relevant to you.

13 de April de 2019

Catrin Labs Computers: New home

Wow! The idea of building a new-but-retro computer has had an excellent reception; with this it started to become a bit more complicated to handle the feedback from my personal Facebook account, and at the same time I began to receive questions from people who don’t speak Spanish. That’s why I decided to give more infrastructure to […]

Source

12 de April de 2019

Alborán BBS: ALBINTRO.ZIP

June 5, 1996: Alborán BBS was the second BBS from Almería connected to FidoNet. It had its own promotional demo: ALBINTRO.ZIP


It has rained a bit since then. Even in Almería.

11 de April de 2019

An online discussion about free standards

This is not a very important post, but I want to leave a record of a particular discussion regarding free access to standards, particularly the ISO/IEC ones. I’ll not share the link nor mention the gentleman I confronted:

Original question:

Is there any way to access and get ISO standards for free like sci-hub for scholar papers?

After a previous comment this sir answers:

ISO and ASTM only exist through sales of their standards. Imagine you had to work for no money. Would you be happy with this?

And me, well, I can’t restrain myself…

Very happy, yes!

He answers me:

So you can feed, house, and clothe your family with no income whatsoever?

and then answers another participant:

Because it’s theft. In certain parts of the Middle East, theft is dealt with by Sharia law. This is my last statement in this question.

And my final arguments:

Dear XXXX: there are a bunch of successful standards bodies publishing documents royalty-free and at no cost. Take just the very well known W3C, OASIS, IETF, IEEE or OMG. I bet they are able to fund themselves.

With ISO the case can be seen as outrageous, since their norms (or the national equivalents) are frequently made mandatory by local laws, and paying to read the law is absolutely unfair.

BTW it’s nice to see you mention Sharia law. That says a lot about you.

The Sharia comment shocked me o_0

I can recognize a certain trolling tone of mine that I probably should moderate. Anyway, I strongly believe in the underlying arguments: legal and industrial standards should be, at least, open access.

An absolutely inspiring analysis of what an open standard really is can be found in the 2005 Ken Krechmer paper Open Standards Requirements. I found this table particularly striking:

Table 1. Creators, implementers and users see openness differently.
Requirements (stakeholders: Creator, Implementer, User):
1 Open Meeting
2 Consensus
3 Due Process
4 One World
5 Open IPR
6 Open Change
7 Open Documents
8 Open Interface
9 Open Access
10 On-going Support


The open standards matter has interested me for many years now (my friends may remember that old SOOS obsession of mine). I should find the time to do some research on standardization and how it applies to software and open development communities.

10 de April de 2019

Participation in the KDE España podcast «Software Libre y la política»

esLibre congress

Rubén Gómez was kind enough to invite me, along with Adrián Chaves and Aleix Pol, to take part in the session of the KDE España podcast dedicated to free software and politics. Here’s the video of the session, which somehow ended up lasting two hours:

We took the opportunity to encourage people to take part in the upcoming esLibre gathering in Granada on June 21 and the international congress of the LibreOffice community, LibOCon 2019, which will take place in our pleasant city of Almería from September 10 to 13.

A great pleasure to share that time with these friends.

Closely related to the topic, I’m digging up the slides of an old talk I gave on several occasions, Administración pública, software libre y Revolución Digital:

Administración pública, software libre y Revolución Digital

And to finish, Rubén took the opportunity to remind me of the little interview I gave during the visit to the Rodalquilar mines around the time of the Akademy 2017 congress, which we also had the honor of hosting in our Mediterranean city:

09 de April de 2019

Recent news

A few pieces of news today.

esLibre congress

On the one hand, my proposed activities for the esLibre congress have been accepted:

I’m really looking forward to presenting 29110_EPF_library, which is the bulk of the final-year project (TFG) I will defend this summer.

And a reminder that the call for participation is still open: take advantage of it and we’ll see each other in Granada.

esLibre congress

And news about the annual international LibreOffice gathering that we’ll host in Almería in September:

We’re also preparing the May activities of the ccALM Creative Commons Almería 2019 Festival, and we’ll surely do more wiki bits and nice mapping sessions. Join in and bring your own proposals.

And finally, the outrage of the season: I’ve created a nice git repository to host the history of my Google activity (Google Takeout). Whether it’s good for anything remains to be seen :-)

08 de April de 2019

Catrin Labs Computers: CLC-88, an 8-bit Micro

Getting down to business, it’s time to settle some definitions for the 8-bit computer. One of the hardest decisions is the processor, since any choice immediately leaves out 50% of the population. Most programmers master only one of the two, the 6502 or the Z-80, depending on the […]

Source

07 de April de 2019

Catrin Labs Computers: Video Chips

The not-so-obvious part of the hardware in this project is the video chip. If we think of using a video chip from the era, we’d necessarily marry one kind of aesthetic, be it the Spectrum’s attribute clash or the Atari’s 4 colors per line, or the […]

Source

Catrin Labs Computers

A while back, the 8-bit guy presented a wonderful idea: designing an old-style computer but using modern technology. It’s precisely a subject I had been mulling over for a while (probably since I started programming Prince of Persia for the Atari), and when I began to watch the video about […]

Source

01 de April de 2019

Proposals for the esLibre 2019 gathering

esLibre congress

The new esLibre congress is already underway: a national gathering on free technologies and knowledge that will take place next June 21 in the city where I was born, Granada. I’ve taken advantage of the call for participation to respond with three proposals. We’ll see whether the organizers see fit to approve them. The purpose of this entry is simply to echo them.


Title: HackLab Almería, a hyperlocal model of technological community building

Format: talk

Description:

A retrospective, from a personal point of view, of the experience of a guerrilla community-building model in the provinces, in what we call HackLab Almería, dedicated to promoting technology and knowledge, especially open/free ones. Some metrics and lamentations will be provided.

A preliminary version of the slides is available at http://olea.org/conferencias/doc-conf-20171107-CubaConf/.


Title: 29110_EPF_library: towards an open body of knowledge of software development practices suitable for very small organizations

Format: talk or lightning talk

Description:

Very small organizations dedicated to software development have a big problem when they want to formalize quality control of their practices. As a pragmatic alternative, the ISO 29110 family of standards has been proposed to remove their obstacles.

Embracing those aims, the 29110_EPF_library initiative has set itself these objectives:

  • formal modeled repository of 29110 processes and related information;
  • development and tailoring framework for 29110 processes adoption;
  • low adoption barriers for VSEs:
    • the opensource library licensing frees from royalties or restrictive use of IP;
    • using opensource tools reduces the costs of acquisition;
  • open community development;
  • acts as a body of knowledge (BoK) for 29110-related content in particular, and for software and systems engineering in general.

A preliminary version of 29110_EPF_library can be consulted at http://olea.org/tmp/Deploy-Pack-29110-EPF_library/


Title: Wikithon on Andalusia’s historic built heritage: from Andalusia for Spain and Humanity

Format: workshop

Description:

A hands-on workshop where we’ll all work to expand the coverage, in the different projects of the Wikipedia family, of the items recorded in the official Andalusian catalogue, with tasks such as:

  • creating and fleshing out Wikidata entries for the different items;
  • locating, contributing, annotating and georeferencing related photographic material;
  • ditto for Wikipedia entries, with preference for the Iberian languages and English.

See you in Granada.

27 de March de 2019

Postfix: Name service error for name=domain.com type=MX: Host not found, try again

I tried to post this on Serverfault but couldn’t, since it gets blocked by their spam detector.

Here is the full text of my question:


Hi:

I’m stuck with a Postfix MX related problem.

I’ve just migrated a very old CentOS 5 server to v7, so I’m using postfix-2.10.1-7.el7.x86_64. I’ve upgraded the legacy Postfix configuration (maybe the cause of this hell) and other supplementary stuff, which seems to work:

  • postfix-perl-scripts-2.10.1-7.el7.x86_64
  • postgrey-1.34-12.el7.noarch
  • amavisd-new-2.11.1-1.el7.noarch
  • spamassassin-3.4.0-4.el7_5.x86_64
  • perl-Mail-SPF-2.8.0-4.el7.noarch
  • perl-Mail-DKIM-0.39-8.el7.noarch
  • dovecot-2.2.36-3.el7.x86_64

After many tribulations I think I’ve got most of the system running, except for the annoying MX-related problems, such as (from /var/log/maillog):

Mar 28 14:26:48 tormento postfix/smtpd[1021]: warning: Unable to look up MX host for spmailtechn.com: Host not found, try again
Mar 28 14:26:51 tormento postfix/smtpd[1052]: warning: Unable to look up MX host for inlumine.ual.es: Host not found, try again
Mar 28 14:31:38 tormento postfix/smtpd[1442]: warning: Unable to look up MX host for aol.com: Host not found, try again
Mar 28 13:07:53 tormento postfix/smtpd[26556]: warning: Unable to look up MX host for hotmail.com: Host not found, try again
Mar 28 13:12:06 tormento postfix/smtpd[26650]: warning: Unable to look up MX host for facebookmail.com: Host not found, try again
Mar 28 13:12:31 tormento postfix/smtpd[26650]: warning: Unable to look up MX host for joker.com: Host not found, try again
Mar 28 13:13:02 tormento postfix/smtpd[26650]: warning: Unable to look up MX host for bounce.linkedin.com: Host not found, try again

and:

Mar 28 14:50:36 tormento postfix/smtp[1700]: 7B6C69C6A2: to=<ismael.olea@gmail.com>, orig_to=<ismael@olea.org>, relay=none, delay=1142, delays=1142/0.07/0/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=gmail.com type=MX: Host not found, try again)
Mar 28 14:32:05 tormento postfix/smtp[1383]: 721A19C688: to=<XXXXX@yahoo.com>, orig_to=<XXXX@olea.org>, relay=none, delay=4742, delays=4742/0/0/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=yahoo.com type=MX: Host not found, try again)

as examples.

The first suspect is DNS resolution, but this works fine whether using the Hetzner DNS servers (where the machine is hosted), 8.8.8.8 or 9.9.9.9:

$ dig mx gmail.com

; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> mx gmail.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20330
;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;gmail.com.			IN	MX

;; ANSWER SECTION:
gmail.com.		3014	IN	MX	10 alt1.gmail-smtp-in.l.google.com.
gmail.com.		3014	IN	MX	5 gmail-smtp-in.l.google.com.
gmail.com.		3014	IN	MX	40 alt4.gmail-smtp-in.l.google.com.
gmail.com.		3014	IN	MX	20 alt2.gmail-smtp-in.l.google.com.
gmail.com.		3014	IN	MX	30 alt3.gmail-smtp-in.l.google.com.

;; Query time: 1 msec
;; SERVER: 213.133.100.100#53(213.133.100.100)
;; WHEN: jue mar 28 14:56:00 CET 2019
;; MSG SIZE  rcvd: 161

or:


$ dig mx inlumine.ual.es

; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> mx inlumine.ual.es
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38239
;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 2, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;inlumine.ual.es.		IN	MX

;; ANSWER SECTION:
inlumine.ual.es.	172800	IN	MX	1 ASPMX.L.GOOGLE.COM.
inlumine.ual.es.	172800	IN	MX	10 ASPMX3.GOOGLEMAIL.COM.
inlumine.ual.es.	172800	IN	MX	10 ASPMX2.GOOGLEMAIL.COM.
inlumine.ual.es.	172800	IN	MX	5 ALT1.ASPMX.L.GOOGLE.COM.
inlumine.ual.es.	172800	IN	MX	5 ALT2.ASPMX.L.GOOGLE.COM.

;; AUTHORITY SECTION:
inlumine.ual.es.	172800	IN	NS	dns.ual.es.
inlumine.ual.es.	172800	IN	NS	alboran.ual.es.

;; Query time: 113 msec
;; SERVER: 213.133.100.100#53(213.133.100.100)
;; WHEN: jue mar 28 14:56:51 CET 2019
;; MSG SIZE  rcvd: 217

my main.cf:

$ postconf -n
address_verify_sender = postmaster@olea.org
alias_database = hash:/etc/aliases
alias_maps = hash:/etc/aliases
body_checks = regexp:/etc/postfix/body_checks.regexp
broken_sasl_auth_clients = yes
canonical_maps = hash:/etc/postfix/canonical
command_directory = /usr/sbin
config_directory = /etc/postfix
content_filter = smtp-amavis:[127.0.0.1]:10024
daemon_directory = /usr/libexec/postfix
data_directory = /var/lib/postfix
debug_peer_level = 2
debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
header_checks = pcre:/etc/postfix/header_checks.pcre
home_mailbox = Maildir/
html_directory = no
inet_interfaces = all
inet_protocols = ipv4
local_recipient_maps = proxy:unix:passwd.byname $alias_maps
mail_owner = postfix
mailbox_command = /usr/bin/procmail -a "$EXTENSION"
mailbox_size_limit = 200000000
mailq_path = /usr/bin/mailq.postfix
manpage_directory = /usr/share/man
message_size_limit = 30000000
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain, tormento.olea.org, /etc/postfix/localdomains
myhostname = tormento.olea.org
newaliases_path = /usr/bin/newaliases.postfix
policy_time_limit = 3600
queue_directory = /var/spool/postfix
readme_directory = /usr/share/doc/postfix-2.10.1/README_FILES
recipient_delimiter = +
sample_directory = /usr/share/doc/postfix-2.10.1/samples
sendmail_path = /usr/sbin/sendmail.postfix
setgid_group = postdrop
smtp_tls_cert_file = /etc/pki/tls/certs/tormento.olea.org.crt.pem
smtp_tls_key_file = /etc/pki/tls/private/tormento.olea.org.key.pem
smtp_tls_mandatory_protocols = !SSLv2,!SSLv3
smtp_tls_note_starttls_offer = yes
smtp_tls_security_level = may
smtpd_helo_required = yes
smtpd_recipient_restrictions = permit_mynetworks check_client_access hash:/etc/postfix/access permit_sasl_authenticated reject_non_fqdn_recipient reject_non_fqdn_sender reject_rbl_client cbl.abuseat.org reject_rbl_client dnsbl-1.uceprotect.net reject_rbl_client zen.spamhaus.org reject_unauth_destination check_recipient_access hash:/etc/postfix/roleaccount_exceptions reject_multi_recipient_bounce check_helo_access pcre:/etc/postfix/helo_checks.pcre reject_non_fqdn_hostname reject_invalid_hostname check_sender_mx_access cidr:/etc/postfix/bogus_mx.cidr check_sender_access hash:/etc/postfix/rhsbl_sender_exceptions check_policy_service unix:postgrey/socket permit
smtpd_sasl_auth_enable = yes
smtpd_sasl_local_domain = $myhostname, olea.org, cacharreo.club
smtpd_sasl_path = private/auth
smtpd_sasl_security_options = noanonymous
smtpd_sasl_type = dovecot
smtpd_tls_auth_only = no
smtpd_tls_cert_file = /etc/pki/tls/certs/tormento.olea.org.crt.pem
smtpd_tls_key_file = /etc/pki/tls/private/tormento.olea.org.key.pem
smtpd_tls_loglevel = 1
smtpd_tls_mandatory_protocols = TLSv1
smtpd_tls_received_header = yes
smtpd_tls_security_level = may
smtpd_tls_session_cache_timeout = 3600s
tls_random_source = dev:/dev/urandom
transport_maps = hash:/etc/postfix/transport
unknown_local_recipient_reject_code = 550
virtual_maps = hash:/etc/postfix/virtual

and my master.cf:

$ postconf -M
smtp       inet  n       -       n       -       -       smtpd
submission inet  n       -       n       -       -       smtpd -o smtpd_tls_security_level=may -o smtpd_sasl_auth_enable=yes -o cleanup_service_name=cleanup_submission -o content_filter=smtp-amavis:[127.0.0.1]:10023
smtps      inet  n       -       n       -       -       smtpd -o smtpd_tls_wrappermode=yes -o smtpd_sasl_auth_enable=yes
pickup     unix  n       -       n       60      1       pickup
cleanup    unix  n       -       n       -       0       cleanup
qmgr       unix  n       -       n       300     1       qmgr
tlsmgr     unix  -       -       n       1000?   1       tlsmgr
rewrite    unix  -       -       n       -       -       trivial-rewrite
bounce     unix  -       -       n       -       0       bounce
defer      unix  -       -       n       -       0       bounce
trace      unix  -       -       n       -       0       bounce
verify     unix  -       -       n       -       1       verify
flush      unix  n       -       n       1000?   0       flush
proxymap   unix  -       -       n       -       -       proxymap
proxywrite unix  -       -       n       -       1       proxymap
smtp       unix  -       -       n       -       -       smtp
relay      unix  -       -       n       -       -       smtp -o fallback_relay=
showq      unix  n       -       n       -       -       showq
error      unix  -       -       n       -       -       error
retry      unix  -       -       n       -       -       error
discard    unix  -       -       n       -       -       discard
local      unix  -       n       n       -       -       local
virtual    unix  -       n       n       -       -       virtual
lmtp       unix  -       -       n       -       -       lmtp
anvil      unix  -       -       n       -       1       anvil
scache     unix  -       -       n       -       1       scache
smtp-amavis unix -       -       n       -       2       smtp -o smtp_data_done_timeout=1200 -o smtp_send_xforward_command=yes -o disable_dns_lookups=yes -o max_use=20
127.0.0.1:10025 inet n   -       n       -       -       smtpd -o content_filter= -o local_recipient_maps= -o relay_recipient_maps= -o smtpd_restriction_classes= -o smtpd_delay_reject=no -o smtpd_client_restrictions=permit_mynetworks,reject -o smtpd_helo_restrictions= -o smtpd_sender_restrictions= -o smtpd_recipient_restrictions=permit_mynetworks,reject -o mynetworks_style=host -o mynetworks=127.0.0.0/8 -o strict_rfc821_envelopes=yes -o smtpd_error_sleep_time=0 -o smtpd_soft_error_limit=1001 -o smtpd_hard_error_limit=1000 -o smtpd_client_connection_count_limit=0 -o smtpd_client_connection_rate_limit=0 -o receive_override_options=no_header_body_checks,no_unknown_recipient_checks
policy     unix  -       n       n       -       2       spawn user=nobody argv=/usr/bin/perl /usr/share/postfix/policyd-spf-perl

I fear I’m missing something really obvious, but I’ve been googling for two days, running all kinds of tests, and now I don’t know what else to do.

Thanks in advance.


Postscript:

Well, this is embarrassing. As I predicted my problem was caused by the most obvious and trivial reason: lack of read access to /etc/resolv.conf for the postfix user o_0

As you probably know, the postfix subprocesses (smtp, smtpd, qmgr, etc.) run as the postfix user. All the comments and suggestions I received were related to problems accessing DNS resolution data, and the usual suspects were SELinux or a chrooted Postfix. You were all right about the root cause. Following a piece of advice, I tried:

# sudo -u postfix -H cat /etc/resolv.conf
cat: /etc/resolv.conf: Permission denied

So… What??

# ls -l /etc/resolv.conf
-rw-r-----. 1 root named 118 mar 28 20:34 /etc/resolv.conf

OMG!… then, after a chmod o+r and restarting Postfix, all the email on hold could be processed and sent, and new mail is processed as expected.

I doubt I changed the resolv.conf read permissions myself, but I can’t be 100% sure. So the problem is finally fixed, and I’m very sorry for stealing the attention of all of you for such a ridiculous reason.

Thank you all.

31 de January de 2019

A mutter and gnome-shell update

Some personal highlights:

Emoji OSK

The reworked OSK was featured a couple of cycles ago, but a notable thing that was still missing from the design reference was emoji input.

No more; it’s sitting in a branch as of yet:

This UI feeds from the same emoji list as GtkEmojiChooser, and applies the same categorization/grouping; all the additional variants of an emoji are available through a popover. There’s also a (less catchy) keypad UI in place, ultimately hooked to applications through the GtkInputPurpose.

I do expect this to be in place for 3.32 for the Wayland session.

X11 vs Wayland

Ever since the Wayland work started on mutter, there have been ideas and talks about how the mutter “core” should become detached from X11 code. It has been a long and slow process: every design decision has been directed towards this goal, we leaped forward in the 2017 GSOC, and e.g. Georges sums up some of his own recent work in this area.

For me it started with a “Hey, I think we are not that far off” comment in #gnome-shell earlier this cycle. Famous last words. After rewriting several (many!) seemingly unrelated subsystems, and shuffling things here and there, we got to a point where gnome-shell might run with --no-x11 set. One more little push and we will be able to launch mutter as a pure Wayland compositor that just spawns Xwayland on demand.

What’s after that? It’s certainly an important milestone, but by no means are we done here. Also, gnome-settings-daemon consists for the most part of X11 clients, which spoils the fun by requiring Xwayland very early in a real session. Guess what’s next!

At the moment about 80% of the patches have been merged. I cannot promise at this point that it will all be in place for 3.32, but 3.34 most surely. Still, here’s a small yet extreme proof of work:

Performance

It’s been nice to see some of the performance improvements I did last cycle finally being merged. Some notable ones, like the one that stopped triggering full surface redraws on every surface invalidation. I also managed to get some blocking operations out of the main loop, which should fix many of the seemingly random stalls some people were seeing.

Those are already in 3.31.x, with many other nice fixes in this area from Georges, Daniel Van Vugt et al.

Fosdem

As a minor note, I will be attending Fosdem and the GTK+ Hackfest happening right after. Feel free to say hi or find Wally, whichever comes first.

29 de January de 2019

Working on the Chromium Servicification Project

Igalia & Chromium

It’s been a few months already since I (re)joined Igalia as part of its Chromium team, and I couldn’t be happier about it: right from the very first day I felt perfectly integrated into the team, and quickly started making my way through the (fully upstream) project that would keep me busy during the following months: the Chromium Servicification Project.

But what is this “Chromium servicification project”? Well, according to the Wiktionary the word “servicification” means, applied to computing, “the migration from monolithic legacy applications to service-based components and solutions”, which is exactly what this project is about: as described in the Chromium servicification project’s website, the whole purpose behind this idea is “to migrate the code base to a more modular, service-oriented architecture”, in order to “produce reusable and decoupled components while also reducing duplication”.

Doing so would not only make Chromium a more manageable project from a source-code point of view and create better and more stable interfaces to embed Chromium from different projects, but it should also enable teams to experiment with new features by combining these services in different ways, as well as to ship different products based on Chromium without having to bundle the whole world just to provide a particular set of features.

For instance, as Camille Lamy put it in the talk delivered (slides here) during the latest Web Engines Hackfest, “it might be interesting long term that the user only downloads the bits of the app they need so, for instance, if you have a very low-end phone, support for VR is probably not very useful for you”. This is of course not the current status of things yet (right now everything is bundled into a big executable), but it’s still a good way to visualise where this idea of moving to a services-oriented architecture should take us in the long run.

Chromium Servicification Layers

With this in mind, the idea behind this project would be to work on the migration of the different parts of Chromium depending on those components that are being converted into services, which would be part of a “foundation” base layer providing the core services that any application, framework or runtime built on top of Chromium would need.

As you can imagine, the whole idea of refactoring such an enormous code base as Chromium’s is daunting and a lot of work, especially considering that currently ongoing efforts can’t simply be stopped just to perform this migration. And that is where our focus currently lies: we integrate with different teams from the Chromium project working on the migration of those components into services, and we make sure that the clients of their old APIs move away from them and use the new services’ APIs instead, while keeping everything running normally in the meantime.

At the beginning, we started working on the migration to the Network Service (which allows running Chromium’s network stack even without a browser) and managed to get it shipped in Chromium Beta by early October already, which was a pretty big deal as far as I understand. In my particular case, that stage was a very short ride since the migration was nearly done by the time I joined Igalia, but it’s still worth mentioning, for extra context, due to the impact it had on the project.

After that, our team started working on the migration of the Identity service, where the main idea is to encapsulate the functionality of accessing the user’s identities right through this service, so that one day this logic can be run outside of the browser process. One interesting bit about this migration is that this particular functionality (largely implemented inside the sign-in component) has historically been located quite high up in the stack, and yet it’s now being pushed all the way down into that “foundation” base layer, as a core service. That’s probably one of the factors contributing to making this migration quite complicated, but everyone involved is being very dedicated and has been very helpful so far, so I’m confident we’ll get there in a reasonable time frame.

If you’re curious enough, though, you can check this status report for the Identity service, where you can see the evolution of this particular migration, along with the impact our team had since we started working on this part, back on early October. There are more reports and more information in the mailing list for the Identity service, so feel free to check it out and/or subscribe there if you like.

One clarification is needed, though: for now, the scope of these migrations is focused on using the public C++ APIs that such services expose (see //services/<service_name>/public/cpp), but in the long run the idea is that those services will also provide Mojo interfaces. That will enable using their functionality regardless of whether you’re running those services as part of the browser’s process or inside their own separate processes, which will then allow the flexibility that Chromium will need to run smoothly and safely in different kinds of environments, from the least constrained ones to others with a less favourable set of resources at their disposal.

And this is it for now, I think. I was really looking forward to writing a status update about what I’ve been up to in the past months and here it is, even though it’s not the shortest of all reports.

FOSDEM 2019

One last thing, though: as usual, I’m going to FOSDEM this year as well, along with a bunch of colleagues & friends from Igalia, so please feel free to drop me/us a line if you want to chat and/or hangout, either to talk about work-related matters or anything else really.

And, of course, I’d also be more than happy to talk about any of the open job positions at Igalia, should you consider applying. There are quite a few of them available at the moment for all kinds of things (most of them open to remote work): from more technical roles such as graphics, compilers, multimedia, JavaScript engines, browsers (WebKit, Chromium, Web Platform) or systems administration (this one not available for remote, though), to other less “hands-on” types of roles like developer advocate, sales engineer or project manager, so it’s possible there’s something interesting for you if you’re considering joining such a special company as this one.

See you in FOSDEM!

23 de January de 2019

WORA-WNLF


I started my career writing web applications. I struggled with PHP web frameworks, javascript libraries, and rendering differences (CSS and non-CSS glitches) across browsers. After leaving that world, I started focusing more on the backend side of things, fleeing from the frontend camp (mainly, actually, just scared of that abomination that was javascript; in my spare time I still did things with frontends: I hacked on a GTK media player called Banshee and a GTK chat app called Smuxi).

So there you had me: a backend dev by day, desktop dev by night. But in the GTK world I had similar struggles to the ones I’d had as a frontend dev when the browsers wouldn’t behave in the same way. I’m talking about GTK bugs on non-Linux OSs, i.e. Mac and Windows.

See, I wanted to bring a desktop app to the masses, but these problems (and others of different kinds) prevented me from doing it. And while all this was happening, another major shift was happening as well: desktop environments were fading while mobile (and not so mobile: tablets!) platforms were rising in usage. This meant yet more platforms that I wished GTK supported. As I’m not a C language expert (nor did I want to be), I kept googling for the terms “gtk” and “android”, or “gtk” and “iOS”, to see if some hacker had put something together that I could use. But that day never came.

Plus, I started noticing a trend: big companies with important mobile apps began moving away from HTML5 within their apps in favour of native apps, mainly chasing the “native look & feel”. This meant, clearly, that even if someone cooked up a hack that made gtk+ run on Android, it would still feel foreign, and nobody would dare to use it.

So I started to become a fan of abstraction layers that were a common denominator of different native toolkits and kept their native look&feel. For example, XWT, the widget toolkit that Mono uses in MonoDevelop to target all 3 toolkits depending on the platform: Cocoa (on macOS), Gtk (on Linux) and WPF (on Windows). Pretty cool hack if you ask me. But using this would contradict my desire to use a toolkit that already supported Android!

And then there was Xamarin.Forms, an abstraction layer between iOS, Android and WindowsPhone, but one that didn’t support desktops. Plus, at the time, Xamarin was proprietary (and I didn’t want to leave my open source world). It was a big dilemma.

But then, some years passed, and many events happened around Xamarin.Forms:
  • Xamarin (the company) was bought by Microsoft and, at the same time, Xamarin (the product) was open sourced.
  • Xamarin.Forms is open source now (TBH not sure if it was proprietary before, or if it was always open source).
  • Xamarin.Forms started supporting macOS and Windows UWP.
  • Xamarin.Forms 3.0 included support for GTK and WPF.

So that was the last straw that made me switch all my desktop efforts toward Xamarin.Forms. Not only can I still target Linux+GTK (my favorite platform), I can also make my apps run on mobile platforms, and on the desktop OSs that most people use. So I have both my niche and the mainstream covered! But this is not the end: Xamarin.Forms has recently been ported to Tizen too! (A Linux-based OS used by Samsung in smart TVs and watches.)

Now let me ask you something. Do you know of any graphical toolkit that allows you to target 6 different platforms with the same codebase? I repeat: Linux(GTK), Windows(UWP/WPF), macOS, iOS, Android, Tizen. The old Java saying finally applies (but for the frontend side): “write once, run anywhere” (WORA), to which I add “with native look’n’feel” (WORA-WNLF).

If you want to know who is the hero that made the GTK driver of Xamarin.Forms, follow @jsuarezruiz, who BTW has recently been hired by Microsoft to work on their non-Windows IDE ;-)

PS: If you like .NET and GTK, my employer is also hiring! (remote positions might be available too); ping me.

11 de January de 2019

«de Par en Par»: towards a national community gathering on technology and the commons

First, a brief disclaimer: a few years ago, as I resumed an active life, I decided to write more, but only when I could contribute substantive content. The scarcity of my writing from 2018 until today, though, has been due to lack of time: lots and lots of work done, part of which would have deserved more echo in this medium. In this entry I just want to express some thoughts related to a national event that, fortunately, is taking shape as I write these lines. I would love for it to end up being called «de Par en Par».

Rebooting the HispaLinux community

the decadent HispaLinux

A quick bit of background: in a burst of nostalgia and enthusiasm, and spontaneously after a one-off meeting in late January, @SorayaMuoz and @Juantomas took it upon themselves to celebrate the 20th anniversary of the founding of the HispaLinux association (somewhat late, since it has just turned 21) and to call together the old friends and comrades-in-arms from that era. It is sad to admit, but one already speaks of memories like the old man you never imagined you would become.

registration form of the HispaLinux association

almost-50-year-olds remembering little adventures

The important thing is that they called up some friends, created a Telegram group and plundered their contact lists to bring in the whole crowd from back then. In a couple of days we were already more than 100 members and growing. And we are still missing people, #OjO. Some kind of gathering was proposed immediately, and proposals sprang up like goats on a mountainside. One of them, the one of interest in this article, would be a future, desirable technology gathering, heir to the longed-for HispaLinux congresses. Two proposals are taking shape so far, and we will know more soon.

Naming the gathering

With an eye on future annual editions, I would like to propose a new name: «de Par en Par». Why? In many senses we are recovering the spirit, community and values of the old HispaLinux congresses, which back then served as a catalyst for an eager and restless community and preceded the current abundant scene of technology meetups and conferences all over Spain. I myself was a promoter, collaborator or organizer of those gatherings.

Today, therefore, it might seem very fitting to bring back that name. Personally, I believe it is no longer adequate or advisable:

  • the HispaLinux name is burned out: the last active period of the association from which the congress took its name was marked by a decline in plain sight, something normal, even reasonable, within the dynamics of associations;

  • however, the (last?) board that took charge precipitated the association into absolute irrelevance, even shutting down services available to members and, most terribly, cutting off communication with the bulk of the membership and, by extension, ending any activity of democratic representation, with no publicly known general assemblies or other relevant actions; in my opinion the most sensible thing is to keep away from those people, and deep down I would only wish to punish them with the whip of indifference;

  • in my opinion the «Linux» brand no longer has the strength and impact it had, especially, in the first decade of the 2000s; instead it has become a commodity: it can no longer look as sectarian to some as it did in the past, and it is so widely adopted across the industry that these technologies are practically taken for granted in most areas of IT; and this is nothing short of wonderful, but as a brand or name I no longer see in it the disruptive pull of the past;

  • moreover, the evolution of FLOSS communities already goes far beyond the realm of Linux systems, operating systems, the GNU community, etc.: not only are there huge amounts of free software products that run on other systems (Android, Windows, iPhone…), hosted by communities strictly unrelated to Linux (some immense, like Apache or Eclipse), others booming and transversal, built around programming languages and frameworks, but it all goes beyond the software world into free and open content, from Wikipedia, Creative Commons, OpenStreetMap, 3D models… to the ever more abundant open data sources;

  • and in the evolution of HispaLinux’s own activity there was already a very important shift of focus: towards the protection of digital rights and the legal frameworks for the digital society, both to build a common heritage of software and to feed immaterial commons such as innovation (example: the fight against software patents), IT security, privacy, personal anonymity on the Internet, etc.;

  • finally, the HispaLinux name is well known and even beloved by those of us who lived those times most intensely… and we are no longer the youngest; but does it serve as a draw for everyone else? Without wanting to give up this audience of ours, I propose opening up to today’s whole audience, larger, better prepared and more diverse than ever.

Why «de Par en Par»?

  • because it is a gathering of the community, by the community and for the community;

  • because the best meritocracy of hackerdom is based on equality, and that is how we relate and how we collaborate: among peers;

  • and because being de par en par is being wide open: open to the intellectual property and reuse frameworks that we believe are fair and indispensable for today’s digital society, open to all the digital and intellectual products created within those frameworks, and because as a community we are open to newcomers: we don’t want co-optation; you are one of us because you want to be.

We are equals. Transversal. Everything is open. We live wide open: de par en par.

de Par en Par

08 de January de 2019

Epiphany automation mode

Last week I finally found some time to add the automation mode to Epiphany, which allows running automated tests using WebDriver. It’s important to note that the automation mode is not expected to be used by users or applications to control the browser remotely, but only by WebDriver automated tests. For that reason, the automation mode is incompatible with a primary user profile. There are a few other things affected by the automation mode:

  • There’s no persistence. A private profile is created in tmp and only ephemeral web contexts are used.
  • URL entry is not editable, since users are not expected to interact with the browser.
  • An info bar is shown to notify the user that the browser is being controlled by automation.
  • The window decoration is orange to make it even clearer that the browser is running in automation mode.

So, how can I write tests to be run in Epiphany? First, you need to install a recent enough Selenium. For now, only the Python API is supported. Selenium doesn’t have an Epiphany driver, but the WebKitGTK driver can be used with any WebKitGTK+ based browser, by providing the browser information as part of the session capabilities.

from selenium import webdriver

options = webdriver.WebKitGTKOptions()
options.binary_location = 'epiphany'
options.add_argument('--automation-mode')
options.set_capability('browserName', 'Epiphany')
options.set_capability('version', '3.31.4')

ephy = webdriver.WebKitGTK(options=options, desired_capabilities={})
ephy.get('http://www.webkitgtk.org')
ephy.quit()

This is a very simple example that just opens Epiphany in automation mode, loads http://www.webkitgtk.org and closes Epiphany. A few comments about the example:

  • Version 3.31.4 will be the first one including the automation mode.
  • The parameter desired_capabilities shouldn’t be needed, but there’s a bug in selenium that has been fixed very recently.
  • WebKitGTKOptions.set_capability was added in selenium 3.14; if you have an older version, you can use the following snippet instead:
from selenium import webdriver

options = webdriver.WebKitGTKOptions()
options.binary_location = 'epiphany'
options.add_argument('--automation-mode')
capabilities = options.to_capabilities()
capabilities['browserName'] = 'Epiphany'
capabilities['version'] = '3.31.4'

ephy = webdriver.WebKitGTK(desired_capabilities=capabilities)
ephy.get('http://www.webkitgtk.org')
ephy.quit()

To simplify the driver instantiation you can create your own Epiphany driver derived from the WebKitGTK one:

from selenium import webdriver

class Epiphany(webdriver.WebKitGTK):
    def __init__(self):
        options = webdriver.WebKitGTKOptions()
        options.binary_location = 'epiphany'
        options.add_argument('--automation-mode')
        options.set_capability('browserName', 'Epiphany')
        options.set_capability('version', '3.31.4')

        webdriver.WebKitGTK.__init__(self, options=options, desired_capabilities={})

ephy = Epiphany()
ephy.get('http://www.webkitgtk.org')
ephy.quit()

The same for selenium < 3.14:

from selenium import webdriver

class Epiphany(webdriver.WebKitGTK):
    def __init__(self):
        options = webdriver.WebKitGTKOptions()
        options.binary_location = 'epiphany'
        options.add_argument('--automation-mode')
        capabilities = options.to_capabilities()
        capabilities['browserName'] = 'Epiphany'
        capabilities['version'] = '3.31.4'

        webdriver.WebKitGTK.__init__(self, desired_capabilities=capabilities)

ephy = Epiphany()
ephy.get('http://www.webkitgtk.org')
ephy.quit()
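
Putting it all together, a minimal (and hypothetical) test using the driver above with Python’s unittest could look like this; the page title assertion is just my assumption about www.webkitgtk.org, not something the driver guarantees:

import unittest
from selenium import webdriver

class TestEpiphanyAutomation(unittest.TestCase):
    def setUp(self):
        # Same setup as in the examples above (selenium >= 3.14).
        options = webdriver.WebKitGTKOptions()
        options.binary_location = 'epiphany'
        options.add_argument('--automation-mode')
        options.set_capability('browserName', 'Epiphany')
        options.set_capability('version', '3.31.4')
        self.ephy = webdriver.WebKitGTK(options=options, desired_capabilities={})

    def tearDown(self):
        self.ephy.quit()

    def test_load_homepage(self):
        self.ephy.get('http://www.webkitgtk.org')
        # Assumed title substring; adjust to whatever the page really serves.
        self.assertIn('WebKitGTK', self.ephy.title)

if __name__ == '__main__':
    unittest.main()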

31 de December de 2018

21 de December de 2018

Importing JSON into MySQL using MySQL Shell

The MySQL Shell utility lets us import a JSON file into a MySQL table or collection.

First we must enable the mysqlX protocol:

$ mysqlsh -u root -h localhost --mysql --dba enableXProtocol
Please provide the password for 'root@localhost':
Save password for 'root@localhost'? [Y]es/[N]o/Ne[v]er (default No):
enableXProtocol: Installing plugin mysqlx…
enableXProtocol: done

And now we can connect to the MySQL server using MySQL Shell (and the mysqlX protocol):

$ mysqlsh -u root -h localhost --mysqlx

I have an empty database named addi, and I want to import the file result.json into a collection there named addi_collection.

The command to run would be:

MySQL Shell > util.importJson("result.json", {schema: "addi", collection: "addi_collection"});
Importing from file "result.json" to collection `addi`.`addi_collection` in MySQL Server at localhost:33060

The problem I had is that my JSON file didn’t have a unique _id field in each record (see the previous ikasten.io post), so I had to create it. This wouldn’t be a problem with MySQL Server 8.0 or later, but I’m using an oldie server (5.7.19), so I got this error:

Processed 182.22 KB in 80 documents in 0.0340 sec (2.35K documents/s)
Total successfully imported documents 0 (0.00 documents/s)
Document is missing a required field (MySQL Error 5115)

After adding the _id field to every record, I was able to import without problems:

util.importJson("result.json", {schema: "addi", collection: "addi_collection"});
Importing from file "result.json" to collection `addi`.`addi_collection` in MySQL Server at localhost:33060
.. 80.. 80
 Processed 182.93 KB in 80 documents in 0.0379 sec (2.11K documents/s)
 Total successfully imported documents 80 (2.11K documents/s)
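
By the way, if you prefer a script to the editor, a minimal Python sketch can add the field; this assumes, as in my case, one JSON document per line, and the file names are just examples:

import json

# Read one JSON document per line, add an incremental _id, write the result.
with open('result.json') as src, open('result-with-id.json', 'w') as dst:
    for i, line in enumerate(src, start=1):
        doc = json.loads(line)
        doc['_id'] = i
        dst.write(json.dumps(doc) + '\n')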

More info about the JSON import utility in MySQL Shell.

The result of the import is stored in a collection reminiscent of MongoDB collections.

20 de December de 2018

Search and replace with incremental values in Vim

Suppose we have a JSON file like the following:

{ "clave1" : "valor11", "clave2": "valor12", … }
{ "clave1" : "valor21", "clave2": "valor22", … }
…
{ "clave1" : "valorN1", "clave2": "valorN2", … }

and we want to add a new field at the beginning, with an incremental _id, so that it ends up like this:

{ "_id" : 1, "clave1" : "valor11", "clave2": "valor12", … }
{ "_id" : 2, "clave1" : "valor21", "clave2": "valor22", … }
…
{ "_id" : n, "clave1" : "valorN1", "clave2": "valorN2", … }

In Vim we can do it by defining a function:

:let g:incr = 0 
:function Incr() 
:let g:incr = g:incr + 1 
:return g:incr   
:endfu

Once the Incr() function is defined, we can invoke it in a find&replace command using the \= operator, which allows evaluating expressions, to perform the substitution we’re after.

The general form is:

:%s/string_to_find/replacement_string/gc

which in our case becomes:

:%s/^{/\="{\"_id\":" . Incr() . ","/gc

String to find: ^{ (lines starting with {).
Replacement string: \="{\"_id\":" . Incr() . "," (that is, evaluate the expression "{\"_id\":" . Incr() . ",", which will initially produce {"_id":1,).
/gc: global changes (the whole document, not just the first match), with confirmation (you can press the "a" (all) key once you see the changes are correct after the first few substitutions).

If you want more info about functions and the VimScript language, take a look at this tutorial.

15 de December de 2018

Disabling Command+C in VirtualBox for macOS

A quick tip that had intrigued me for a long time. If you use VirtualBox on macOS, surely at some point with a virtual machine running you have accidentally pressed Command+C (⌘+C) to copy text (the default combination on macOS) instead of Ctrl+C (the default on Linux and Windows). The problem is that in VirtualBox the Command+C combination scales the screen size (and makes it tiny!). To disable this annoying behaviour, just open the VirtualBox preferences (press ⌘+,), go to the Input tab, then the Virtual Machine tab, click on Scaled Mode and delete the wretched shortcut.

Goodbye, ⌘+C!

25 de November de 2018

Frogr 1.5 released

It’s almost one year later and, despite the acquisition by SmugMug a few months ago and the predictions from some people that it would mean me stopping using Flickr & maintaining frogr, here comes the new release of frogr 1.5.

Frogr 1.5 screenshot

Not many changes this time, but some of them hopefully still useful for some people, such as the empty initial state that is now shown when you don’t have any pictures, as requested already a while ago by Nick Richards (thanks Nick!), or the removal of the applications menu from the shell’s top panel (now integrated into the hamburger menu), in line with the “App Menu Retirement” initiative.

Then there were some fixes here and there as usual, and quite a few updates to the translations this time, including a brand new translation to Icelandic! (thanks Sveinn).

So this is it this time, I’m afraid. Sorry there’s not much to report, and sorry as well for the long time it took me to do this release, but this past year has been pretty busy between hectic work at Endless the first part of the year, a whole international relocation with my family to move back to Spain during the summer, and me getting back to work at Igalia as part of the Chromium team, where I’m currently pretty busy working on the Chromium Servicification project (which is material for a completely different blog post, of course).

Anyway, last but not least, feel free to grab frogr from the usual places as outlined in its main website, among which I’d recommend the Flatpak method, either via GNOME Software or from the command line by just doing this:

flatpak install --from \
    https://flathub.org/repo/appstream/org.gnome.frogr.flatpakref

For more information just check the main website, which I also updated to this latest release, and don’t hesitate to reach out if you have any questions or comments.

Hope you enjoy it. Thanks!

15 de November de 2018

On the track for 3.32

It happens sneakily, but there are more things going on on the Tracker front than the occasional fallout. Yesterday 2.2.0-alpha1 was released, containing some notable changes.

On and off during the last year, I’ve been working on a massive rework of the SPARQL parser. The old parser was fairly solid, but hard to extend for some of the syntax in the SPARQL 1.1 spec. After multiple attempts and failures at implementing property paths, I convinced myself this was the way forward.

The main difference is that the previous parser was more of a serializer to SQL; only minimal state was preserved across the operation. The new parser does construct an expression tree, so that nodes may be shuffled/reevaluated. This allows some sweet things:

  • Property paths are a nice resource for writing more idiomatic SPARQL, and most property path operators are within reach now. There’s currently support for sequence paths:

    # Get all files in my homedir
    SELECT ?elem {
      ?elem nfo:belongsToContainer/nie:url 'file:///home/carlos'
    }
    


    And inverse paths:

    # Get all files in my homedir by inverting
    # the child to container relation
    SELECT ?elem {
      ?homedir nie:url 'file:///home/carlos' ;
               ^nfo:belongsToContainer ?elem
    }
    

    There are harder ones, like + and *, that will require recursive selects, and there’s the negation (!) operator, which is not possible to implement yet.

  • We now have prepared statements! A TrackerSparqlStatement object was introduced, capable of holding a query with parameters which can be set/replaced prior to execution.

    conn = tracker_sparql_connection_get (NULL, NULL);
    stmt = tracker_sparql_connection_query_statement (conn,
                                                      "SELECT ?u { ?u fts:match ~term }",
                                                      NULL, NULL);
    
    tracker_sparql_statement_bind_string (stmt, "term", search_term);
    cursor = tracker_sparql_statement_execute (stmt, NULL, NULL);
    

    This is a long-sought protection against injections. The object is cacheable and can service multiple cursors asynchronously, so it will also be an improvement for frequent queries. (See the Python sketch after this list.)

  • More concise SQL is generated in places, which brings slight improvements in SQLite query planning.
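
As promised above, here is how the prepared-statement example might look from Python. This is just a sketch of mine, assuming the usual GObject introspection mappings for the Tracker 2.x C API shown above; the namespace version and method names are my assumption:

import gi
gi.require_version('Tracker', '2.0')
from gi.repository import Tracker

# Mirror of the C example above: prepare once, bind, execute.
conn = Tracker.SparqlConnection.get(None)
stmt = conn.query_statement('SELECT ?u { ?u fts:match ~term }', None)

stmt.bind_string('term', 'search term')
cursor = stmt.execute(None)
while cursor.next(None):
    # Through introspection, get_string() returns a (value, length) tuple.
    print(cursor.get_string(0)[0])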

This also got the ideas churning towards future plans, the trend being a generic triple store as SPARQL 1.1-capable as possible. There are also some ideas about better data isolation for Flatpak and sandboxes in general (seeing that the currently supported approach didn’t catch on). Those will eventually happen in this or the following cycles, but I’ll reserve that for another blog post.

An eye was kept on memory usage too (mostly unrealized ideas from the performance hackfest earlier this year): tracker-store has been made to shut down automatically when unneeded (ideally most of the time, since it just takes care of updates and the unruly apps that use the bus connection), and tracker-miner-fs took over the functionality of tracker-miner-apps. That’s 2 fewer processes in your default session.

In general, we’re on the way to an exciting release, and there’s more to come!

13 de November de 2018

Degree final work about ISO/IEC 29110

Cover of «Creation of artifacts for adoption of ISO/IEC 29110 standards» blueprint

I really want to write more in this blog. There are matters I haven’t talked enough about: the SuperSEC and GUADEC conferences, some announcements for 2019 and some activities in Wikipedia (especially in Wikiproyecto-Almería and my first steps in the amazing world of SPARQL); less important, but things I really enjoy.

But now I want to keep a record of significant advances in the university degree I’m finishing these months. I decided to finish a pending course with special interest in the required degree final work, to work on things I’ve been interested in since 2003 but never had the opportunity to focus on deeply enough to study, learn and write some useful (I hope) tools. And it’s being fun :-)

29110 Galore at http://29110.olea.org

So now I can say the project blueprint has been approved by the university. It’s named «Creation of artifacts for adoption of ISO/IEC 29110 standards» (document in Spanish, sorry) and the goal is to produce a set of open source artifacts for the adoption of the 29110 family of standards, focused on a light software engineering methodology suitable for adoption by very small entities (VSEs). At the moment my main target is to work on «Part 5-4: Agile software development guidelines», currently under development by WG24, using the EPF Composer tool.

As a working tool I’m making a (half-baked and maybe temporary) website to keep a record of related materials at http://29110.olea.org.

I hope to announce related news in the coming weeks.

01 de November de 2018

Running EPF Composer in Fedora Linux, v3

Well, I finally succeeded with a native installation of the EPF (Eclipse Process Framework) Composer on my Linux system, thanks to Bruce MacIsaac and the development team’s help. I’m happy. This is not trivial, since EPFC is a 32-bit application running on a modern 64-bit Linux system.

My working configuration:

  • Fedora F28, x86_64
  • java-1.8.0-oracle-1.8.0.181, 32-bit, from the non-free Russian Fedora repository:
    • java-1.8.0-oracle-1.8.0.181-3.fc28.i586.rpm
    • java-1.8.0-oracle-headless-1.8.0.181-3.fc28.i586.rpm
  • EPF Composer Linux/GTK 1.5.2
  • GTK+ v.2 integration dependencies (from main Fedora repository):
    • adwaita-gtk2-theme-3.28-1.fc28.i686.rpm
    • libcanberra-gtk2-0.30-16.fc28.i686.rpm
  • 32-bit xulrunner: xulrunner-10.0.2.en-US.linux-i686.tar.bz2
  • libXt-1.1.5-7.fc28.i686.rpm (from main Fedora repository).

On my system I can obviously install all the RPM packages using DNF. For other distros, look for the equivalent packages.

Maybe I’m missing some minor dependency; I haven’t checked on a clean installation.

Download EPFC and xulrunner and extract each one to the path of your choice. I’m using xulrunner-10.0.2.en-US.linux-i686/ as the directory name to be more meaningful.

The contents of the epf.ini file:

-data
@user.home/EPF/workspace.152
-vmargs
-Xms64m
-Xmx512m
-Dorg.eclipse.swt.browser.XULRunnerPath=/PATHTOXULRUNNER/xulrunner-10.0.2.en-US.linux-i686/

I had to write the full system path in the -Dorg.eclipse.swt.browser.XULRunnerPath property to get Eclipse to recognize it.

And to run EPF Composer:

$ cd $EPF_APP_DIR
$ ./epf -vm /usr/lib/jvm/java-1.8.0-oracle-1.8.0.181/jre/bin/java

If you want to do any non-trivial work with Composer on Linux you’ll need xulrunner, since it’s used extensively for editing content.

Native Linux EPF Composer screenshot

I had success running the Windows EPF version using Wine, and I can do some work with it, but at some point the program becomes unstable and needs a restart. Another very interesting advantage of running natively is that I can use the GTK+ file chooser, which is really a lot better than the simpler native Java one.

I plan to practice modeling a lot with EPF Composer in the coming weeks. Hopefully I’ll share some new artifacts authored by me.

PD: added the required libXt dependency.

25 de October de 2018

3 events in a month

As part of my job at Igalia, I have been attending 2-3 events per year. My role, mostly as a Chromium stack engineer, is not usually very demanding regarding conference trips, but they are quite important as an opportunity to meet collaborators and project mates.

This month has been a bit different, as I ended up visiting the LG Silicon Valley Lab in Santa Clara, California, the Igalia headquarters in A Coruña, and Dresden. It was mostly because I got involved in the discussions for the web runtime implementation being developed by Igalia for AGL.

AGL f2f at LGSVL

It is always great to visit LG Silicon Valley Lab (Santa Clara, US), where my team is located. I have been participating for 6 years in the development of the webOS web stack, which you can most prominently enjoy in LG webOS smart TVs.

One of the goals for the next months at AGL is providing an efficient web runtime. At LGSVL we have been developing and maintaining WAM, the webOS web runtime. And as it was released under an open source license in webOS Open Source Edition, it looked like a great match for AGL. So my team did a proof of concept in May, and it was successful. At the same time, Igalia has been working on porting the Chromium browser to AGL. So, after some discussions, AGL approved sponsoring my company, Igalia, to port the LG webOS web runtime to AGL.

As LGSVL was hosting the September 2018 AGL f2f meeting, Igalia sponsored my trip to the event.

AGL f2f Santa Clara 2018, AGL wiki CC BY 4.0

So we took the opportunity to continue discussions and make progress in the development of the WAM AGL port. And, as we expected, it was quite beneficial to unblock tasks like the AGL app framework security integration, and the support of AGL’s latest official release, Funky Flounder. Julie Kim from Igalia attended the event too, and presented an update on the progress of the Ozone Wayland port.

The organization and the venue were great. Thanks to LGSVL!

Web Engines Hackfest 2018 at Igalia

Next trip was definitely closer. Just 90 minutes drive to our Igalia headquarters in A Coruña.


Igalia has been organizing this event since 2009. It is a cross-web-engine event, where engineers from Mozilla, Chromium and WebKit meet yearly to do some hacking and discuss the future of the web.

This time my main interest was participating in the discussions about the effort by Igalia and Google to support Wayland natively in Chromium. I was pleased to learn that around 90% of the work had already landed in upstream Chromium. Great news, as it will smooth the integration of Chromium for embedders using Ozone Wayland, like webOS. It was also great to learn about the work on improving GPU performance by reducing the number of copies required for painting web content.

Web Engines Hackfest 2018 CC BY-SA 2.0

Other topics of my interest:
– We did a follow-up of the discussion at the last BlinkOn about the barriers for Chromium embedders, sharing our experiences maintaining a downstream Chromium tree.
– I joined the discussions about the future of WebKitGTK; in particular, the adaptation of the graphics pipeline to the upcoming GTK+ 4.

As usual, the organization was great. We had 70 people at the event, and it was awesome to see all the activity in the office, and so many talented engineers in the same place. Thanks Igalia!

Web Engines Hackfest 2018 CC BY-SA 2.0

AGL All Members Meeting Europe 2018 at Dresden

The last event in barely a month was my first visit to the beautiful town of Dresden (Germany).

The goal was continuing the discussions on the projects Igalia is developing for the AGL platform: Chromium upstream native Wayland support, and the WAM web runtime port. We also had a booth showcasing that work, and also our lightweight WebKit port WPE, which was, as usual, attracting interest with its 60fps video playback performance on a Raspberry Pi 2.

I co-presented with Steve Lemke a talk about the automotive activities at LGSVL, taking the opportunity to update on the status of the WAM web runtime work for AGL (slides here). The project is progressing, and Igalia should soon be landing the first results of the work.

Igalia booth at AGL AMM Europe 2018

It was great to meet all these people, and to discuss in person the architecture proposal for the web runtime, unblocking several tasks and offering more detailed planning for the coming months.

Dresden was great, and I can’t help highlighting the reception and guided tour in the Dresden Transportation Museum. Great choice by the organization. Thanks to Linux Foundation and the AGL project community!

Next: Chrome Dev Summit 2018

So… what’s next? I will be visiting San Francisco in November for Chrome Dev Summit.

I can only thank Igalia for sponsoring my attendance at these events. They are quite important for keeping things moving forward. But it is also really nice to meet friends and collaborators. Thanks Igalia!

18 de October de 2018

How to cite ISO/IEC standards in a bibliography

For my final postgraduate work I’m collecting bibliography, and as the main work is around ISO/IEC documents I investigated how to make a correct bibliography entry for these, which I realized is not very well known, as you can check in this question on TeX.StackExchange.com.

I finally chose a style, which I show here as an example:

  • BibTeX:
    @techreport{iso_central_secretary_systems_2016,
      address = {Geneva, CH},
      type = {Standard},
      title = {Systems and software engineering -- {Lifecycle} profiles for {Very} {Small} {Entities} ({VSEs}) -- {Part} 1: {Overview}},
      shorttitle = {{ISO}/{IEC} {TR} 29110-1:2016},
      url = {https://www.iso.org/standard/62711.html},
      language = {en},
      number = {ISO/IEC TR 29110-1:2016},
      institution = {International Organization for Standardization},
      author = {{ISO Central Secretary}},
      year = {2016}
    }
    
  • RIS:
      TY  - RPRT
      TI  - Systems and software engineering -- Lifecycle profiles for Very Small Entities (VSEs) -- Part 1: Overview
      AU  - ISO Central Secretary
      T2  - ISO/IEC 29110
      CY  - Geneva, CH
      PY  - 2016
      LA  - en
      M3  - Standard
      PB  - International Organization for Standardization
      SN  - ISO/IEC TR 29110-1:2016
      ST  - ISO/IEC TR 29110-1:2016
      UR  - https://www.iso.org/standard/62711.html
      ER  - 
    

    I’ve been using this style extensively on a development website, http://29110.olea.org/. You can compare the details with the official info.

    Both have been generated using Zotero.

07 de October de 2018

Banksy Shredder




PD: After some reports about male nudity, this post has been edited to remove the portrait of my back. If you have reservations about male nudity, PLEASE DON'T FOLLOW THE LINK.

PPD: If you don't have a problem with male nudity, for your convenience here you'll find the Wikimedia Commons category «Nude men» of pictures.

«Software Quality Assurance, First Edition» PDF file

Print ISBN: 9781118501825, Online ISBN: 9781119312451, DOI: 10.1002/9781119312451

For your convenience I’ve compiled into just one file the book Software Quality Assurance by Claude Y. Laporte and Alain April. The book is provided for free download at the publisher’s website as separate files. Download the full book.

About the book: «This book introduces Software Quality Assurance (SQA) and provides an overview of standards used to implement SQA. It defines ways to assess the effectiveness of how one approaches software quality across key industry sectors such as telecommunications, transport, defense, and aerospace.»

It is licensed as Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Claude Y. Laporte is the editor of the ISO/IEC 29110 software engineering standard for very small entities (VSEs).

PD: Added licensing details.

04 de October de 2018

GUADEC 2018 by numbers

GUADEC 2018 badge

It took me a while, but now I can give you some stats from GUADEC 2018, following Sam’s example from last year.

They are very rough, but I hope informative.

  • Attendees: 207 (and about 215 registered), two fewer than in 2017.
  • 9 days: 2 for board meetings, 3 for talks and 4 for BoFs and workshops.
  • 44 talks and videos.
  • 35 BoFs and workshops.
  • 3 great parties, including the flamenco show by «la Chinelita and group».
  • About the economics, just to say it was very successful. Thanks a lot to our sponsors and donors. And special kudos to the sponsoring team for such an impressive job.
  • The average age was 32.7, from the 143 people who provided their age. The minimum age was 15 and the maximum 61.
  • There were 43 Spanish attendees, 20.8% of the total, and a significant presence from the UK, with 20% (see the quick check after this list).
  • The maximum number of people per day hosted at the Civitas residence was 75. I would have expected a bigger number, but several factors played a part: the GUADEC dates fell in high season, with a particular peak because of some local events (at some point there was not a single room available in Almería city); another was the coincidence with a university summer course; and finally, many people tried to book at Civitas very late.
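
For what it’s worth, those percentages are easy to double-check; here is a trivial Python check, with the numbers taken from the bullets above and the country table below:

attendees = 207
spanish = 43
uk = 41  # from the country table below

print(round(100 * spanish / attendees, 1))  # -> 20.8
print(round(100 * uk / attendees, 1))       # -> 19.8, i.e. roughly 20%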

The attendees who filled in their country of residence, by country:

country number
Argentina 1
Australia 1
Austria 2
Belgium 1
Brazil 2
Canada 3
China 3
Czech Republic 10
Denmark 2
Finland 1
France 9
Germany 11
Greece 1
India 2
Israel 1
Italy 3
Japan 1
Latvia 1
Netherlands 2
New Zealand 1
Norway 2
Romania 3
Russian Federation 2
Spain 43
Sri Lanka 1
Sweden 2
Switzerland 1
United Kingdom 41
United States 25
Unspecified 18


The GUADEC occupancy at Civitas was:

date               number
02/07/2018 1
03/07/2018 7
04/07/2018 21
05/07/2018 66
06/07/2018 70
07/07/2018 71
08/07/2018 75
09/07/2018 70
10/07/2018 63
11/07/2018 40
12/07/2018 5
13/07/2018 1


Thanks to Benjamin Berg for helping to collect the data.

Thank you all for visiting us in Almería. Don’t forget to come back :-)

PD: post edited on 2018/11/15 adding details of residence occupancy.

02 de October de 2018

Wacom's graphic tablet sizes (2)

In a previous entry I posted the data I’d collected about Wacom digitizer tablets. Collecting the data took me more time than I really wished. But now I’m happy to publish an exhaustive list, thanks to Carlos Garnacho:

model active area size mm active area size in
Wacom ISDv4 E2 356 ✕ 203 mm 14 ✕ 8 in
Wacom Intuos BT M 254 ✕ 203 mm 10 ✕ 8 in
Wacom ISDv4 104 229 ✕ 127 mm 9 ✕ 5 in
Wacom Intuos BT S 203 ✕ 152 mm 8 ✕ 6 in
Wacom Intuos M 254 ✕ 203 mm 10 ✕ 8 in
Wacom Intuos S 203 ✕ 152 mm 8 ✕ 6 in
Wacom ISDv4 5110 305 ✕ 178 mm 12 ✕ 7 in
Wacom Bamboo Pen medium 152 ✕ 102 mm 6 ✕ 4 in
Wacom Bamboo Fun medium (2+FG) 229 ✕ 127 mm 9 ✕ 5 in
Wacom DTU-2231 483 ✕ 279 mm 19 ✕ 11 in
Wacom Bamboo Pen small 152 ✕ 102 mm 6 ✕ 4 in
Wacom Cintiq 21UX2 432 ✕ 330 mm 17 ✕ 13 in
Wacom Graphire Wireless 203 ✕ 152 mm 8 ✕ 6 in
ELAN 2537 356 ✕ 203 mm 14 ✕ 8 in
Wacom ISDv4 5002 279 ✕ 152 mm 11 ✕ 6 in
Wacom ISDv4 5000 279 ✕ 152 mm 11 ✕ 6 in
Wacom ISDv4 485e 254 ✕ 178 mm 10 ✕ 7 in
Wacom Bamboo (2+FG) 127 ✕ 76 mm 5 ✕ 3 in
Wacom DTH1152 229 ✕ 127 mm 9 ✕ 5 in
Wacom Bamboo Fun small (2+FG) 152 ✕ 102 mm 6 ✕ 4 in
Wacom Bamboo Pen & Touch (2+FG) 152 ✕ 102 mm 6 ✕ 4 in
Wacom Cintiq 22HD touch 483 ✕ 279 mm 19 ✕ 11 in
Wacom DTI520UB/L 356 ✕ 305 mm 14 ✕ 12 in
Wacom Bamboo Fun medium (2FG) 229 ✕ 127 mm 9 ✕ 5 in
Wacom Bamboo Fun small (2FG) 152 ✕ 102 mm 6 ✕ 4 in
Wacom Bamboo (2FG) 152 ✕ 102 mm 6 ✕ 4 in
Wacom Bamboo Touch (2FG) 127 ✕ 76 mm 5 ✕ 3 in
Huion H610 Pro 254 ✕ 152 mm 10 ✕ 6 in
Bamboo One 127 ✕ 102 mm 5 ✕ 4 in
Wacom Bamboo Pen 152 ✕ 102 mm 6 ✕ 4 in
Wacom Intuos BT M 254 ✕ 203 mm 10 ✕ 8 in
Wacom ISDv4 12C 279 ✕ 152 mm 11 ✕ 6 in
Wacom Intuos BT S 203 ✕ 152 mm 8 ✕ 6 in
Wacom Intuos4 WL 203 ✕ 127 mm 8 ✕ 5 in
Wacom Intuos4 12x19 483 ✕ 305 mm 19 ✕ 12 in
Wacom Intuos4 8x13 330 ✕ 203 mm 13 ✕ 8 in
Wacom ISDv4 5146 305 ✕ 178 mm 12 ✕ 7 in
Wacom Cintiq Pro 13 305 ✕ 178 mm 12 ✕ 7 in
Wacom MobileStudio Pro 16 356 ✕ 203 mm 14 ✕ 8 in
Wacom MobileStudio Pro 13 305 ✕ 178 mm 12 ✕ 7 in
Wacom ISDv4 484c 254 ✕ 178 mm 10 ✕ 7 in
Wacom DTU-1931 381 ✕ 305 mm 15 ✕ 12 in
Wacom Cintiq 12WX 254 ✕ 178 mm 10 ✕ 7 in
Wacom Cintiq 20WSX 432 ✕ 279 mm 17 ✕ 11 in
Wacom Cintiq 21UX 432 ✕ 330 mm 17 ✕ 13 in
Wacom ISDv4 4004 279 ✕ 152 mm 11 ✕ 6 in
Wacom Cintiq Pro 16 356 ✕ 203 mm 14 ✕ 8 in
Wacom DTF-720 330 ✕ 279 mm 13 ✕ 11 in
Wacom Intuos Pro 2 L 305 ✕ 203 mm 12 ✕ 8 in
Wacom Intuos Pro 2 M 229 ✕ 152 mm 9 ✕ 6 in
Wacom DTH2242 483 ✕ 279 mm 19 ✕ 11 in
Wacom ISDv4 5099 254 ✕ 178 mm 10 ✕ 7 in
Wacom DTK2241 483 ✕ 279 mm 19 ✕ 11 in
Wacom Cintiq Pro 32 686 ✕ 381 mm 27 ✕ 15 in
Wacom Cintiq Pro 24 PT 508 ✕ 305 mm 20 ✕ 12 in
Wacom Intuos3 12x19 483 ✕ 305 mm 19 ✕ 12 in
Wacom Intuos4 6x9 229 ✕ 152 mm 9 ✕ 6 in
Wacom Intuos4 4x6 152 ✕ 102 mm 6 ✕ 4 in
Wacom Intuos Pro 2 L WL 305 ✕ 203 mm 12 ✕ 8 in
Wacom Intuos Pro 2 M WL 229 ✕ 152 mm 9 ✕ 6 in
Wacom Intuos3 4x6 152 ✕ 102 mm 6 ✕ 4 in
Wacom Intuos3 6x8 203 ✕ 152 mm 8 ✕ 6 in
Intuos Pen & Touch Medium 229 ✕ 127 mm 9 ✕ 5 in
Wacom ISDv4 5013 305 ✕ 178 mm 12 ✕ 7 in
Wacom ISDv4 5014 254 ✕ 152 mm 10 ✕ 6 in
Wacom Intuos4 WL 203 ✕ 127 mm 8 ✕ 5 in
Intuos Pen Medium 229 ✕ 127 mm 9 ✕ 5 in
Intuos Pen & Touch Small 152 ✕ 102 mm 6 ✕ 4 in
Intuos Pen Small 152 ✕ 102 mm 6 ✕ 4 in
Wacom ISDv4 50f8 356 ✕ 203 mm 14 ✕ 8 in
Huion H610 Pro 254 ✕ 152 mm 10 ✕ 6 in
Wacom ISDv4 504a 305 ✕ 178 mm 12 ✕ 7 in
Wacom Intuos3 6x11 279 ✕ 152 mm 11 ✕ 6 in
Wacom Intuos3 12x12 305 ✕ 305 mm 12 ✕ 12 in
Wacom Intuos3 9x12 305 ✕ 229 mm 12 ✕ 9 in
XP-Pen Star 03 254 ✕ 152 mm 10 ✕ 6 in
Wacom ISDv4 50f1 305 ✕ 178 mm 12 ✕ 7 in
Wacom Intuos3 4x5 127 ✕ 102 mm 5 ✕ 4 in
Wacom DTK1651 356 ✕ 203 mm 14 ✕ 8 in
Wacom ISDv4 10D 279 ✕ 152 mm 11 ✕ 6 in
Wacom ISDv4 10F 279 ✕ 152 mm 11 ✕ 6 in
Wacom ISDv4 10E 279 ✕ 152 mm 11 ✕ 6 in
Wacom Intuos2 12x18 457 ✕ 305 mm 18 ✕ 12 in
Wacom Intuos2 12x12 305 ✕ 305 mm 12 ✕ 12 in
Wacom Intuos2 9x12 305 ✕ 229 mm 12 ✕ 9 in
Wacom Intuos2 6x8 203 ✕ 152 mm 8 ✕ 6 in
Wacom Intuos2 4x5 127 ✕ 102 mm 5 ✕ 4 in
Wacom DTU1031X 229 ✕ 127 mm 9 ✕ 5 in
Wacom Cintiq 27QHD 610 ✕ 305 mm 24 ✕ 12 in
Wacom ISDv4 503E 305 ✕ 178 mm 12 ✕ 7 in
Wacom ISDv4 117 279 ✕ 152 mm 11 ✕ 6 in
Wacom ISDv4 116 203 ✕ 152 mm 8 ✕ 6 in
Wacom ISDv4 503F 305 ✕ 178 mm 12 ✕ 7 in
Wacom ISDv4 50b8 305 ✕ 178 mm 12 ✕ 7 in
Wacom Cintiq 27QHD touch 610 ✕ 305 mm 24 ✕ 12 in
Wacom ISDv4 50b6 305 ✕ 178 mm 12 ✕ 7 in
Wacom ISDv4 50b4 305 ✕ 178 mm 12 ✕ 7 in
Wacom Intuos5 M 229 ✕ 152 mm 9 ✕ 6 in
Wacom DTU1141 229 ✕ 127 mm 9 ✕ 5 in
Wacom ISDv4 5048 254 ✕ 152 mm 10 ✕ 6 in
Wacom Cintiq 13HD touch 305 ✕ 178 mm 12 ✕ 7 in
Wacom ISDv4 5044 254 ✕ 152 mm 10 ✕ 6 in
Wacom ISDv4 4831 305 ✕ 178 mm 12 ✕ 7 in
Wacom ISDv4 5040 305 ✕ 178 mm 12 ✕ 7 in
Huion H610 Pro 254 ✕ 152 mm 10 ✕ 6 in
Dell Canvas 27 584 ✕ 330 mm 23 ✕ 13 in
Wacom Cintiq Companion 2 305 ✕ 178 mm 12 ✕ 7 in
Wacom Cintiq 22HD 483 ✕ 279 mm 19 ✕ 11 in
Wacom ISDv4 101 279 ✕ 152 mm 11 ✕ 6 in
Wacom ISDv4 100 279 ✕ 152 mm 11 ✕ 6 in
Wacom ISDv4 481a 305 ✕ 178 mm 12 ✕ 7 in
Wacom ISDv4 93 254 ✕ 152 mm 10 ✕ 6 in
Wacom DTU1031 229 ✕ 127 mm 9 ✕ 5 in
Wacom Intuos5 touch L 330 ✕ 203 mm 13 ✕ 8 in
N-Trig Pen 254 ✕ 152 mm 10 ✕ 6 in
Wacom ISDv4 509D 305 ✕ 178 mm 12 ✕ 7 in
Wacom Intuos5 S 152 ✕ 102 mm 6 ✕ 4 in
Wacom Intuos5 touch S 152 ✕ 102 mm 6 ✕ 4 in
Wacom Intuos5 touch M 229 ✕ 152 mm 9 ✕ 6 in
Intuos Pen Medium 229 ✕ 127 mm 9 ✕ 5 in
Wacom ISDv4 4824 102 ✕ 178 mm 4 ✕ 7 in
Wacom Intuos 12x18 457 ✕ 305 mm 18 ✕ 12 in
Wacom ISDv4 4822 279 ✕ 152 mm 11 ✕ 6 in
Wacom Intuos 12x12 305 ✕ 305 mm 12 ✕ 12 in
Wacom Intuos 9x12 305 ✕ 229 mm 12 ✕ 9 in
Wacom Intuos 6x8 203 ✕ 152 mm 8 ✕ 6 in
Wacom Intuos 4x5 127 ✕ 102 mm 5 ✕ 4 in
Wacom ISDv4 90 305 ✕ 203 mm 12 ✕ 8 in
Wacom Cintiq 24HD touch 533 ✕ 330 mm 21 ✕ 13 in
Intuos Pen Small 152 ✕ 102 mm 6 ✕ 4 in
Wacom Bamboo Pad 102 ✕ 76 mm 4 ✕ 3 in
Wacom ISDv4 124 229 ✕ 127 mm 9 ✕ 5 in
Wacom Cintiq Companion 305 ✕ 178 mm 12 ✕ 7 in
Wacom Bamboo Pad Wireless 102 ✕ 76 mm 4 ✕ 3 in
Wacom DTH2452 508 ✕ 305 mm 20 ✕ 12 in
Wacom Cintiq Pro 24 P 508 ✕ 305 mm 20 ✕ 12 in
Wacom ISDv4 93 254 ✕ 152 mm 10 ✕ 6 in
Wacom Graphire 127 ✕ 102 mm 5 ✕ 4 in
Wacom ISDv4 90 305 ✕ 203 mm 12 ✕ 8 in
Wacom Intuos Pro L 330 ✕ 203 mm 13 ✕ 8 in
Wacom Intuos Pro M 229 ✕ 152 mm 9 ✕ 6 in
Wacom Intuos Pro S 152 ✕ 102 mm 6 ✕ 4 in
Wacom Graphire2 4x5 127 ✕ 102 mm 5 ✕ 4 in
Wacom ISDv4 4814 254 ✕ 178 mm 10 ✕ 7 in
Wacom Graphire4 4x5 127 ✕ 102 mm 5 ✕ 4 in
Wacom Graphire3 6x8 203 ✕ 152 mm 8 ✕ 6 in
Wacom Graphire3 4x5 127 ✕ 102 mm 5 ✕ 4 in
Wacom Graphire2 5x7 178 ✕ 127 mm 7 ✕ 5 in
One by Wacom (medium) 229 ✕ 127 mm 9 ✕ 5 in
One by Wacom (small) 152 ✕ 102 mm 6 ✕ 4 in
Wacom Cintiq 24HD 533 ✕ 330 mm 21 ✕ 13 in
Wacom DTU-1631 356 ✕ 203 mm 14 ✕ 8 in
Wacom Bamboo Special Edition Pen & Touch medium 229 ✕ 127 mm 9 ✕ 5 in
Wacom Bamboo Create 152 ✕ 102 mm 6 ✕ 4 in
Wacom ISDv4 114 229 ✕ 127 mm 9 ✕ 5 in
Wacom DTK2451 508 ✕ 305 mm 20 ✕ 12 in
Wacom Bamboo Capture 152 ✕ 102 mm 6 ✕ 4 in
Wacom Bamboo Connect 152 ✕ 102 mm 6 ✕ 4 in
Wacom Bamboo 16FG 4x5 152 ✕ 102 mm 6 ✕ 4 in
Wacom ISDv4 5090 279 ✕ 152 mm 11 ✕ 6 in
Wacom Bamboo Special Edition Pen & Touch small 152 ✕ 102 mm 6 ✕ 4 in
Wacom Cintiq Companion Hybrid 305 ✕ 178 mm 12 ✕ 7 in
Wacom ISDv4 4809 102 ✕ 178 mm 4 ✕ 7 in
XP-Pen Star 03 254 ✕ 152 mm 10 ✕ 6 in
Huion H610 Pro 254 ✕ 152 mm 10 ✕ 6 in
Wacom Cintiq 13HD 305 ✕ 178 mm 12 ✕ 7 in
Intuos Pen & Touch Medium 229 ✕ 127 mm 9 ✕ 5 in
Intuos Pen & Touch Small 152 ✕ 102 mm 6 ✕ 4 in
One by Wacom (medium) 229 ✕ 127 mm 9 ✕ 5 in
One by Wacom (small) 152 ✕ 102 mm 6 ✕ 4 in
Wacom ISDv4 5010 279 ✕ 152 mm 11 ✕ 6 in
Wacom ISDv4 E6 279 ✕ 152 mm 11 ✕ 6 in
Wacom ISDv4 E5 279 ✕ 152 mm 11 ✕ 6 in


As a reference, these are the standard DIN sizes comparable with those models (a small comparison helper follows the table):

DIN type   size
A4 210 x 297 mm
A5 148 x 210 mm
A6 105 x 148 mm
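
Just as an illustration of that comparison, here is a tiny Python helper; the DIN dimensions come from the table above, and matching by closest area is my own (simplistic) criterion:

# DIN sizes from the table above, in mm.
DIN_SIZES = {'A4': (210, 297), 'A5': (148, 210), 'A6': (105, 148)}

def closest_din(width_mm, height_mm):
    """Return the DIN size whose area is closest to the given active area."""
    area = width_mm * height_mm
    return min(DIN_SIZES,
               key=lambda name: abs(DIN_SIZES[name][0] * DIN_SIZES[name][1] - area))

# Example: the Wacom Intuos S active area (203 x 152 mm) is closest to A5.
print(closest_din(203, 152))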


The source of this data is the Wacom driver database (libwacom) itself, extracted with this C program Carlos provided:


/* Build with:
 *   gcc -o wacomfoo `pkg-config --libs --cflags libwacom` wacomfoo.c
 */

#include <stdio.h>
#include <libwacom/libwacom.h>

#define IN_TO_MM 25.4

int
main (int argc, char *argv[])
{
  const WacomDeviceDatabase *db;
  WacomDevice **devices;
  int i;

  /* Load the system libwacom database and list every known device. */
  db = libwacom_database_new ();
  devices = libwacom_list_devices_from_database (db, NULL);
  printf ("| model | active area size mm | active area size in | \n");
  printf ("|:--------- |:--------- |:--------- | \n");

  for (i = 0; devices[i] != NULL; i++)
    {
      /* Skip devices without a known active area. */
      if (libwacom_get_width (devices[i]) == 0)
        continue;
      /* libwacom reports sizes in inches; convert to mm for the first pair. */
      printf ("| %s | %.f ✕ %.f mm | %.f ✕ %.f in | \n",
              libwacom_get_name (devices[i]),
              (double) libwacom_get_width (devices[i]) * IN_TO_MM,
              (double) libwacom_get_height (devices[i]) * IN_TO_MM,
              (double) libwacom_get_width (devices[i]),
              (double) libwacom_get_height (devices[i]));
    }
  return 0;
}

PD: Fixed the value for millimeters per inch.

23 de September de 2018

Wacom's graphic tablet sizes

For various reasons I’ve been looking for second-hand Wacom graphic tablets. It has been annoying for me to find out which size each model has. So I’m writing down here the list of the models I gathered.

The reason for looking only at Wacoms is that these days they seem to be very well supported on Linux, at least the old models you can get second hand.

model active area size
CTL460 147.2 x 92.0 mm
CTL 420 127.6 x 92.8 mm
CTE-430 Graphire 3 127 x 101 mm
CTF-430 127.6 x 92.8 mm
CTL 460 147.2 x 92.0 mm
CTH-460 147.2 x 92.0 mm
CTH-461 147.2 x 92.0 mm
CTH-470 147.2 x 92.0 mm
CTL-470 147.2 x 92.0 mm
CTL-480 Intuos 152 x 95 mm
CTE-640 208.8 x 150.8 mm
CTE-650 216.5 x 135.3 mm
CTH-661 215.9 x 137.16 mm
CTH-670 217 x 137 mm
ET-0405A-U 127 x 106 mm
Graphire 2 127.6 x 92.8 mm
Intuos 2 127.6 x 92.8 mm (probably)
Volito 2 127.6 x 92.8 mm


As a reference, these are the standard DIN sizes comparable with those models:

DIN type   size
A4 210 x 297 mm
A5 148 x 210 mm
A6 105 x 148 mm

If you find any typo or want to add other models, feel free to comment.

PD: This post has been obsoleted by a new entry.