March 25, 2017

“Harry Dresden – Wizard”

Harry Dresden is the only wizard you can find in the phone book. Go to the “Wizards” section of the Chicago directory and nobody else is listed. His specialties are paranormal investigations and finding lost objects. He doesn't do tricks, perform at parties or brew love potions. He's a serious wizard. A professional.

“What’s the sign on the door say?”

“It says ‘Harry Dresden. Wizard.’”

“That’s me,” I confirmed.

(Storm Front)

His full name is Harry Blackstone Copperfield Dresden, but you'd better not use it in vain. Names are powerful, and if you knew how to pronounce it correctly you would have power over him.

Dresden has an office in midtown Chicago. He struggles to make ends meet, because legal jobs for a wizard are limited. He lives in the basement of an old house, full of books and covered with rugs and second-hand furniture. No electronics: no TV, no computer, no Internet. Magic and electronics don't get along: one wrong gesture, one stray thought, and something could go “poof” and catch fire. A huge cat, Mister, honors Harry by living with him and letting Harry feed him. Cats are like that, and Mister is every inch a gentleman cat.

Harry Dresden's other housemate is Bob. Bob loves erotic novels. He has a pile of them on his shelf, worn out from use. They sit next to his home, a human skull several centuries old. Bob is a spirit of knowledge, somewhat impertinent, lecherous and a smart-aleck, but he knows everything, or almost everything, there is to know about magic and the magical world.

Another constant in Harry Dresden's recent life is Karrin Murphy, director of the Special Investigations department of the Chicago police. “Special Investigations” means that SI handles everything that is… weird. Dresden is SI's consultant on supernatural matters, which comes in very handy for paying the rent.

Vampire attacks, troll maraudings, and faery abductions of children didn’t fit in very neatly on a police report—but at the same time, people got attacked, infants got stolen, property was damaged or destroyed. And someone had to look into it.

(Storm Front)

Apart from being broke and suffering the disdain of muggles (pardon the word borrowed from the saga of another wizard named Harry), there was a fatal incident Harry was involved in as a kid. Nothing out of the ordinary: his mentor tried to murder him, and Harry used his powers to kill him, setting fire to the house they were in. Harry has a gift for elemental fire magic; and although other wizards can handle it with subtlety, that is not one of Harry Dresden's strong suits.

That plan did have a lot of words like assault and smash and blast in it, which I had to admit was way more my style.

(Ghost Story)

Since that moment, the White Council (the council of wizards that makes sure none of them gets out of line, and punishes those who do) has been watching him closely. In the wizards' world, that's like having a criminal record; with the caveat that, if there were another offense, there would be no trial: the offender would be executed. And Morgan, one of the Council's enforcers, who looks like Sean Connery in “Highlander” (terribly sharp sword included), is eager for that to happen.

Oh, and on top of that, Harry's godmother is one of the most powerful members of the faerie aristocracy, and she wants to turn him into one of her hounds. But other than that, his life is more or less normal.

Jim Butcher

Harry Dresden's creator is Jim Butcher. He had written three novels since deciding, at only nineteen, to become a professional writer; but none of them were published. He enrolled in a writing course in 1996, and one of his “homework assignments” was to write something along the lines of Anita Blake: Vampire Hunter. Butcher did it, following the instructions of his teacher Deborah Chester, but without much conviction.

When I finally got tired of arguing with her and decided to write a novel as if I was some kind of formulaic, genre writing drone, just to prove to her how awful it would be, I wrote the first book of the Dresden Files.

He still had to wait more than two years before what had been born as Semiautomagic, and would now be called Storm Front, the first Harry Dresden novel, was published. During that time he wrote the second novel, Fool Moon, and started the third, Grave Peril.

Since 2000, Harry Dresden's debut year, Butcher has published another 14 volumes of his adventures. To avoid spoilers, I'll only say that what began as a series of stand-alone adventures started getting more complicated after the first few books, and now Harry Dresden has much more to fight for, and much more to lose.

A hardboiled wizard

The style of the Harry Dresden books recalls the noir novels and hardboiled detectives created by authors like Dashiell Hammett or Raymond Chandler: they are told in the first person and the protagonist is a cynical anti-hero (someone frowned upon by traditional society) who has seen it all.

But, unlike Sam Spade and his kin, Harry Dresden makes you laugh. Not only do funny things happen to him, but his sarcasm is witty and entertaining. In one of the books, one of his enemies takes him prisoner and auctions him off on eBay. Harry suggests he put “one Harry Dresden, slightly used” in the item description.

The books are short and packed with action. They recall pulp novels, where there are never more than five consecutive pages without something happening: a gunshot, a surprise, a vampire attack (it's Harry Dresden, after all). Every book plants clues that end up blooming at the end, when everything is revealed and Good triumphs… almost always.

Harry Dresden is like his British namesake, but with more spark, more bite and more pretty girls. The good guys aren't entirely good, and the bad guys sometimes do good things. There are wizards, faeries and monsters. What's not to like?

by xouba on March 25, 2017 10:59 AM

March 20, 2017

Media Source Extensions upstreaming, from WPE to WebKitGTK+

A lot of good things have happened to the Media Source Extensions support since my last post, almost a year ago.

The most important piece of news is that the code upstreaming has kept going forward at a slow but steady pace. The amount of code Igalia had to port was pretty big. Calvaris (my favourite reviewer) and I considered that the regular review tools in WebKit bugzilla were not going to be enough for a good exhaustive review. Instead, we did a pre-review in GitHub using a pull request on my own repository. It was an interesting experience, because the change set was so large that it had to be (artificially) divided into smaller commits just to avoid reaching GitHub diff display limits.

394 GitHub comments later, the patches were mature enough to be submitted to bugzilla as child bugs of Bug 157314 – [GStreamer][MSE] Complete backend rework. After some more comments in bugzilla, they were finally committed during Web Engines Hackfest 2016.

Some unforeseen regressions appeared in the layout tests, but after a couple more commits, all the mediasource WebKit tests were passing. There are also some other tests imported from W3C, but I've kept them skipped because webm support is needed for many of them. I'll focus on that set of tests in due time.

Igalia is proud of having brought the MSE support up to date in WebKitGTK+. Eventually, this will improve the browser video experience for many users of Epiphany and other web browsers based on that library. As a showcase, it enables the usage of YouTube TV at 1080p@30fps on desktop Linux.

Our future roadmap includes bugfixing and webm/vp9+opus support. This support is important for users from countries enforcing patents on H.264. The current implementation can’t be included in distros such as Fedora for that reason.

As mentioned before, part of this upstreaming work happened during Web Engines Hackfest 2016. I’d like to thank our sponsors for having made this hackfest possible, as well as Metrological for giving upstreaming the importance it deserves.

Thank you for reading.

 

by eocanha on March 20, 2017 11:55 AM

March 08, 2017

New board of directors

GPUL has a new leadership team since last Monday, March 6, 2017, with Saúl González at the helm as the seventh president in the association's more than nineteen years of existence. This is the new Board of Directors that takes the reins as of now:

President: Saúl González Eiros

Vice president: Javier Vila Besada

Secretary: Pedro Costal Millán

Treasurer: Bruno Cabado Lousa

Board members:
David Maseda Neira

Santiago Saavedra López

Pablo Castro Valiño


by gpul on March 08, 2017 11:35 AM

February 21, 2017

GPUL Labs 2017

GPUL Labs was born last year with the goal of bringing together developers and members of the maker community to learn new technologies and build real software projects entirely with Free Software, contributing to the community and gaining experience at the same time.

The event was a resounding success over the 3 months it lasted, with a total of 11 talks on free technologies given by speakers from the most cutting-edge companies and associations in Galicia, as well as 2 hackathons from which more than 10 different projects emerged.

The impact of GPUL Labs spread across the whole region, with more than 450 attendees, 3 international sponsors and 8 collaborating companies.

That's why this year we want to repeat the experience and keep growing. You can learn more about GPUL Labs on the website https://labs.gpul.org/ and, if you want to give a talk, collaborate or sponsor, drop by the following repository: https://github.com/gpul-labs/labs2017/ We count on your attendance to build a huge and active Free Software community in A Coruña :)

by gpul on February 21, 2017 12:05 AM

February 15, 2017

GPUL Ordinary and Extraordinary Assemblies

An Ordinary Assembly of GPUL is hereby called for Monday, March 6, 2017, in the Aula de Graos of the Facultade de Informática.

First call: 19:30
Second call: 20:00

Agenda:

- Reading and approval, if applicable, of the minutes of the previous Assembly.
- Reading of member registrations and withdrawals since the last Assembly.
- Reading and approval, if applicable, of the 2014 accounts.
- Reading and approval, if applicable, of the 2015 accounts.
- Reading and approval, if applicable, of the 2016 accounts.
- Status of the 2017 accounts.
- Reading and approval, if applicable, of the plan of activities for 2017.
- Discussion and approval, if applicable, of moving the association's account to another bank.
- Questions and requests.

An Extraordinary Assembly of GPUL is hereby called for Monday, March 6, 2017, in the Aula de Graos of the Facultade de Informática.

First call: 20:30
Second call: 21:00

Agenda:
- Start of the vote for the Board of Directors.
- Vote count.
- Appointment of the new Board of Directors.
- Questions and requests.

UPDATE
- The assembly will take place in room 2.0a of the FIC.
- A typo was detected in the schedule of the extraordinary assembly; it was fixed at the start of the assembly, moving both calls forward by 30 minutes.

A copy of the minutes of the last Assembly is attached for review by the members and future attendees.

Signed,
Pablo Castro,
Secretary of GPUL.

Attachment: acta.pdf (53.3 KB)

by gpul on February 15, 2017 11:12 PM

February 08, 2017

QEMU and the qcow2 metadata checks

When choosing a disk image format for your virtual machine, one of the factors to take into consideration is its I/O performance. In this post I’ll talk a bit about the internals of qcow2 and about one of the aspects that can affect its performance under QEMU: its consistency checks.

As you probably know, qcow2 is QEMU’s native file format. The first thing that I’d like to highlight is that this format is perfectly fine in most cases and its I/O performance is comparable to that of a raw file. When it isn’t, chances are that this is due to an insufficiently large L2 cache. In one of my previous blog posts I wrote about the qcow2 L2 cache and how to tune it, so if your virtual disk is too slow, you should go there first.

I also recommend Max Reitz and Kevin Wolf’s qcow2: why (not)? talk from KVM Forum 2015, where they talk about a lot of internal details and show some performance tests.

qcow2 clusters: data and metadata

A qcow2 file is organized into units of constant size called clusters. The cluster size defaults to 64KB, but a different value can be set when creating a new image:

qemu-img create -f qcow2 -o cluster_size=128K hd.qcow2 4G
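
You can check the cluster size of an existing image with qemu-img info. The output below is just a rough sketch of what you would see (exact fields vary between QEMU versions):

$ qemu-img info hd.qcow2
image: hd.qcow2
file format: qcow2
virtual size: 4.0G (4294967296 bytes)
disk size: 196K
cluster_size: 131072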

Clusters can contain either data or metadata. A qcow2 file grows dynamically and only allocates space when it is actually needed, so apart from the header there’s no fixed location for any of the data and metadata clusters: they can appear mixed anywhere in the file.

Here’s an example of what it looks like internally:

In this example we can see the most important types of clusters that a qcow2 file can have:

Metadata overlap checks

In order to detect corruption when writing to qcow2 images QEMU (since v1.7) performs several sanity checks. They verify that QEMU does not try to overwrite sections of the file that are already being used for metadata. If this happens, the image is marked as corrupted and further access is prevented.

Although in most cases these checks are innocuous, under certain scenarios they can have a negative impact on disk write performance. This depends a lot on the case, and I want to insist that in most scenarios it doesn’t have any effect. When it does, the general rule is that you’ll have more chances of noticing it if the storage backend is very fast or if the qcow2 image is very large.

In these cases, and if I/O performance is critical for you, you might want to consider tweaking the images a bit or disabling some of these checks, so let’s take a look at them. There are currently eight different checks. They’re named after the metadata sections that they check, and can be divided into the following categories:

  1. Checks that run in constant time. These are equally fast for all kinds of images and I don’t think they’re worth disabling.
    • main-header
    • active-l1
    • refcount-table
    • snapshot-table
  2. Checks that run in variable time but don’t need to read anything from disk.
    • refcount-block
    • active-l2
    • inactive-l1
  3. Checks that need to read data from disk. There is just one check here and it’s only needed if there are internal snapshots.
    • inactive-l2

By default all tests are enabled except for the last one (inactive-l2), because it needs to read data from disk.

Disabling the overlap checks

Tests can be disabled or enabled from the command line using the following syntax:

-drive file=hd.qcow2,overlap-check.inactive-l2=on
-drive file=hd.qcow2,overlap-check.snapshot-table=off

It’s also possible to select the group of checks that you want to enable using the following syntax:

-drive file=hd.qcow2,overlap-check.template=none
-drive file=hd.qcow2,overlap-check.template=constant
-drive file=hd.qcow2,overlap-check.template=cached
-drive file=hd.qcow2,overlap-check.template=all

Here, none means that no tests are enabled, constant enables all tests from group 1, cached enables all tests from groups 1 and 2, and all enables all of them.
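
For illustration, here's how a complete (if minimal) invocation enabling only the constant-time checks could look; the memory and KVM flags are incidental to the example:

$ qemu-system-x86_64 -m 2G -enable-kvm \
      -drive file=hd.qcow2,format=qcow2,overlap-check.template=constant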

As I explained in the previous section, if you’re worried about I/O performance then the checks that are probably worth evaluating are refcount-block, active-l2 and inactive-l1. I’m not counting inactive-l2 because it’s off by default. These three run in variable time, so their cost grows with the amount of metadata in the image.

Conclusion

The qcow2 consistency checks are useful to detect data corruption, but they can affect write performance.

If you’re unsure and you want to check it quickly, open an image with overlap-check.template=none and see for yourself, but remember again that this will only affect write operations. To obtain more reliable results you should also open the image with cache=none in order to perform direct I/O and bypass the page cache. I’ve seen performance increases of 50% and more, but whether you’ll see them depends a lot on your setup. In many cases you won’t notice any difference.

I hope this post was useful to learn a bit more about the qcow2 format. There are other things that can help QEMU perform better, and I’ll probably come back to them in future posts, so stay tuned!

Acknowledgments

My work in QEMU is sponsored by Outscale and has been made possible by Igalia and the help of the rest of the QEMU development team.

by berto on February 08, 2017 08:52 AM

February 02, 2017

Going to FOSDEM!

It’s been two years since the last time I went to FOSDEM, but it seems that this year I’m going to be there again and, after having traveled to Brussels a few times already by plane and train, this year I’m going by car: from Staines to the Eurotunnel and then all the way up to Brussels. Let’s see how it goes.

FOSDEM 2017

As for the conference, I don’t have any particular plan other than going to some keynotes and probably spending most of my time in the Distributions and the Desktops devrooms. Well, and of course joining other GNOME people at A La Bécasse, on Saturday night.

As you might expect, I will have my Endless laptop with me while in the conference, so feel free to come and say “hi” in case you’re curious or want to talk about that if you see me around.

At the moment, I’m mainly focused on developing and improving our flatpak story, how we deliver apps to our users via this wonderful piece of technology and how the overall user experience ends up being, so I’d be more than happy to chat/hack around this topic and/or about how we integrate flatpak in EndlessOS, the challenges we found, the solutions we implemented… and so forth.

That said, flatpak is one of my many development hats in Endless, so be sure I’m open to talk about many other things, including not work-related ones, of course.

Now, if you’ll excuse me, I have a bag to prepare, an English car to “adapt” for the journey ahead and, more importantly, quite a few hours of sleep to get. Tomorrow will be a long day, but it will be worth it.

See you at FOSDEM!

by mario on February 02, 2017 10:05 PM

January 24, 2017

Call for elections to the Board of Directors

Elections to the Board of Directors of GPUL are hereby called for the following reasons:

- At the request of the President.
- Because twenty-four months have passed since the last call for elections to the Board of Directors.

According to the Electoral Regulations (attached), the period for submitting candidacies opens tomorrow. The electoral calendar is as follows:

Call date: 26/01/2017
Submission of candidacies: 27/01/2017 to 08/02/2017
Publication of the provisional list of candidacies: 09/02/2017
Period for appeals: 09/02/2017 to 13/02/2017
Publication of the final list of candidacies: 15/02/2017

Start of the electoral campaign: 16/02/2017
Electronic voting:
Request: 13/02/2017 to 17/02/2017
Reception of votes: from 20/02/2017 until 6 hours before the first call of the Extraordinary General Assembly with the vote as an item on the agenda.
Voting by postal mail:
Request: 27/01/2017 to 03/02/2017
Sending of ballots: 15/02/2017 to 17/02/2017
Reception of votes: from 15/02/2017 until 6 hours before the first call of the Extraordinary General Assembly with the vote as an item on the agenda.

Call for the Extraordinary General Assembly with the vote as an item on the agenda: 15/02/2017
Holding of the Extraordinary General Assembly with the vote as an item on the agenda: 06/03/2017 to 14/03/2017

For electronic voting, only the FNMT digital certificate will be accepted.

The current Board of Directors encourages all members to take part in the process.

Pablo Castro Valiño
Secretary of GPUL

by castrinho8 on January 24, 2017 08:55 PM

December 31, 2016

How to use an ANSI keyboard in Spanish

On a whim, this Christmas I bought myself a 68-key Magicforce keyboard.

My precious keyboard

It's a mechanical keyboard and costs about €60, hence its charm. Mechanical keyboards are usually expensive, and although this one costs more than you'd pay for a "standard" keyboard (that is, a lousy one), it's not much compared to what you could pay for a mechanical keyboard from other brands (around €100). Besides, I don't know about you, but I barely use the numeric keypad, and I wanted a keyboard without one.

But there's a problem: the keyboard layout is not ISO, but ANSI. To put it simply, ISO is the format we're used to in Europe, with an "Enter" key spanning two rows and the key for the "greater than" and "less than" symbols to the right of a minimally-sized left "Shift". The ANSI layout is the usual one in the USA, and it has some differences: the Enter key takes up only one row, and there are two keys for "greater than" and "less than" where we have the period and comma keys.

Below is a diagram, so you can see the difference.

Source: wooting.nl


There are several versions of ANSI keyboards for different languages, just as there are for ISO. An ISO keyboard can be Spanish, French, German or another European language; an ANSI keyboard can be US English or UK English (they're slightly different), for example. Mine is a US ANSI, like all Magicforce keyboards.

Writing in a language other than US English on one of these keyboards is possible, although it takes some adaptation. There are two options: use a Spanish map and manually remap the keys that differ, or learn to use a US map.

ANSI keyboard with a Spanish map

Before going on, a brief review of how a computer keyboard works. Your keyboard doesn't send letters or numbers, but codes (scancodes). Linux converts those codes into others, the keycodes, which are interpreted according to a keyboard map to turn them into the characters we end up seeing on screen.

You can use a Spanish map on an ANSI keyboard. You just have to load it on the console with loadkeys or in X with setxkbmap. The downside is that the keys will be in the wrong places: where the keyboard says semicolon there will be an eñe, the parentheses will be one key to the left of what's printed on the keyboard, and the quotes will be where the at sign is printed, among other changes. Even for this there's a solution: Magicforce keyboards use standard keycaps, and there are many replacement sets on the Internet that we can use to change their look and, while we're at it, put the Spanish characters where they are on a normal keyboard. Most keycap sets are in English, but Spanish ones exist… so they say. I haven't found them. Maybe a unicorn sells them in a shop at the end of the rainbow.

There are some keys we won't be able to map because they simply don't exist on the keyboard. An ISO keyboard has a key for "greater than" and "less than", but the ANSI one doesn't. To fix this, we can keep using the Spanish map and assign those values to some exotic key combination, like AltGr with "z" and "x". For that we'll use xmodmap.

xmodmap is used to set up combinations of keys and modifiers (like "Shift" or "AltGr"). Even if you don't know it, you're already using it: your desktop environment calls it when your session starts to configure the keyboard map you're going to use. In Gnome you can configure the keyboard so that the Windows+Space key combination switches between several predefined keyboard maps, and what it does "under the hood" is use xmodmap to load one map or another.

Since what we want is to change the combinations for the letters "z" and "x", the first thing to do is look at their current configuration. We can see the whole current xmodmap configuration with this command:

$ xmodmap -pke

Each line has this form:

keycode XX: valor1 valor2 valor3 valor4 valor5 valor6 valor7 valor8 valor9 valor10

Here, the first value is what the keycode produces without any modifier: for example, pressing the letter "z". The second is the keycode with the "Shift" modifier applied: pressing "z" and "Shift" would give "Z". From the second value onwards things get complicated: each value corresponds to the key with one of the X modifiers ("mode_switch", "Alt", "Meta", "Hyper"…), followed by the same one combined with "Shift".

The pair of values we're interested in is the third one, which corresponds to combinations with "AltGr" (you can take my word for it, or go to this link, where it's explained in more detail). To find the keycodes for "z" and "x" we can grep for the pairs "z Z" and "x X" in the output of the command above:

$ xmodmap -pke | grep "z Z"
$ xmodmap -pke | grep "x X"

Which should give you output similar to this:

$ xmodmap -pke | grep "z Z"
keycode  52 = z Z z Z guillemotleft less
$ xmodmap -pke | grep "x X" 
keycode  53 = x X x X guillemotright greater

As you can see, letters and numbers have an obvious representation (themselves: "z" is "z"), but other characters have special names. "guillemotleft" is a left-pointing angle quote («), "guillemotright" is its right-pointing counterpart (»), "less" is the "less than" symbol and "greater" is the "greater than" symbol.

You'll have noticed that "less than" and "greater than" are already among the key combinations available for "z" and "x". Specifically, since they're the second option of the third pair, they'd be accessible using AltGr+Shift+"z" and AltGr+Shift+"x", respectively. You could leave it like that; but personally, I use those two characters much more often than the angle quotes, and I'd rather save myself a modifier press. So we can swap their values so that the "less than" and "greater than" symbols come out with just AltGr. The commands to do it would be these:

$ xmodmap -e "keycode 52 = z Z z Z less guilllemotleft"
$ xmodmap -e "keycode 53 = x X x X greater guillemotright"

Ta-da! Now pressing AltGr+"z" will produce the "<" symbol, and pressing AltGr+"x" will produce ">".

You can do even more things with xmodmap. Another key that doesn't exist on an ANSI keyboard is the ordinal indicator ("primero/primera") key we have on the Spanish ISO layout; we could look it up in the current map and bind it to another key combination. The possibilities are endless.
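
Keep in mind that changes made with xmodmap -e only last until the end of the session. A common way to make them persistent, assuming your session setup loads it (not all desktop environments do), is to put the expressions in ~/.Xmodmap and load that file at login:

$ cat > ~/.Xmodmap << 'EOF'
keycode 52 = z Z z Z less guillemotleft
keycode 53 = x X x X greater guillemotright
EOF
$ xmodmap ~/.Xmodmap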

ANSI keyboard with a "native" map

There's another option if you have a US ANSI keyboard, although it requires more effort: using it as such. For that you'll need to enable a special keyboard map that covers the standard keys of a US ANSI keyboard but adds changes that make it usable with international characters. This map is called, fittingly, "US International". There are two variants, and we're going to use the one with dead keys.

Dead keys are keys that don't produce any character on their own, but do so when combined with another. The simplest example is accented vowels. To write "á", you first press the accent key and then the "a" key. The US international map with dead keys does exactly that, so typing accented letters works the same as on the Spanish ISO keyboard.

Other key combinations we're used to are a bit more complicated: for example, for the letter eñe you have to use AltGr+"n". If that bothers us, we can use xmodmap for something more comfortable. I've seen people on the Internet who change the accent+n combination (which produces "ń" by default) to the eñe. I leave other examples as an exercise for the reader.

In Gnome or KDE we can configure the keyboard to switch between several maps, and all we'd have to do is add the US international one with dead keys, to use it whenever we want to use the ANSI keyboard. In Unity (Ubuntu) the switch is bound by default to the Windows key plus the space bar, and the active map is shown in a small icon in the status bar.

If we want to do it the hard way, we can also open a console and use setxkbmap:

$ setxkbmap us -variant intl

But it's the same thing you'll get with your desktop's switcher.

Personally, I've gone for the second option. I still use a Spanish ISO keyboard at work, because I don't want to become a pariah by using a keyboard nobody around me uses, but I think switching between keyboard maps is good mental gymnastics. I type a bit more slowly, but in exchange I get to use a keyboard with a fantastic feel. And that reminds me of a verse from a song you'll remember if you're as old as I am:

They say you have poison in your skin,
and it's because you're made of fine plastic.
They say you have a divine touch,
and whoever touches you is left with it.

It has neither poison nor skin; it's made of plastic, though not especially fine; it does have a divine touch… and I don't know whether whoever touches it is left with it, but my wife has already made several comments that lead me to think there will soon be another keyboard like this one at home.

by Roberto Suárez Soto (noreply@blogger.com) on December 31, 2016 07:23 PM

November 05, 2016

Intro to Git and how to use GitHub or GitLab to manage your class assignments

We all begin our degree creating a thousand files with different versions of our assignments:

practica, practica-v1, practica-final, practica-final-final...

In this talk we're going to present the most complete of version control systems, which on top of that is Free Software. Attendees will learn the basic concepts of Git, the popular version control tool used to manage the code of projects as important as the Linux kernel, Gnome, KDE, PostgreSQL…

It lets us work as a team on the same code at the same time without problems, keep the different versions of our application neatly structured, store the code in an external repository and many more things that are very useful in our daily work as developers.
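
As a taste of what the talk will cover, a minimal first session with Git looks something like this (the file names are made up for the example):

$ git init practica          # create a new repository
$ cd practica
$ echo "version 1" > memoria.txt
$ git add memoria.txt        # stage the file
$ git commit -m "First version of the assignment"
$ git log --oneline          # browse the history: no more practica-v1, practica-final...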

Admission, as always, is free ;)


by gpul on November 05, 2016 07:18 PM

November 02, 2016

[Santiago] 1st Free Culture Week

At GPUL we never stop! This time we have a group of restless folks in Santiago de Compostela that nobody can hold back ;)

That's why, during the week of November 14 to 19, we're organizing the 1st Free Culture Week together with the people of Matadoiro de Compostela, with all kinds of activities for anyone who wants an introduction to the free software world.

Workshop on free tools for video and audio editing
Monday (14/11/2016) 17:00-21:00

Café GNU
Wednesday (16/11/2016) 19:00

Install Party
Saturday (19/11/2016) 12:00-18:00

 

You can find more information at the following link:

http://www.matadoirocompostela.com/blog/2016/11/02/i-semana-da-cultura-libre/

 


by gpul on November 02, 2016 07:48 PM

October 25, 2016

Open Data Hackathiño - Summary

Last weekend we organized the Open Data Hackathiño, the start of our journey into a new field: Open Data.

First of all, huge thanks to everyone who helped organize the event, both our friends at Árticos, GDG and Coruña Dixital and the Universidade da Coruña for lending us the space.

The event was very well received, with more than 35 attendees and even a few extra at the introductory talks, in which Juan Romero from OpenKratio managed to get across the importance of opening up data in a standard, reusable way, far from the typical PDFs, and as the foundation of a healthy democracy.
We also heard first-hand from Lluis Esquerda about his experience in this field through a real project, citybik.es, which served as a reference for the participating projects.
Among the attendees was the city councillor for participation, who took the opportunity to tell us about an open-data project the city council has, and even received feedback and suggestions from the attendees.


The event took place at NORMAL; once again we decided to leave the Facultade to get closer to the city and to any groups that might want to take part, since the initial idea was for them to propose projects too.

Up to 5 teams took part in the event. The runner-up was Open Clean Energy, whose goal was to compare weather and energy consumption data to find out which type of energy is most profitable to invest in depending on the area of Spain you're in.
The winner was OpenPet, a project that seeks to open up animal shelters' data to make adoption easier. The team created an open API that initially parses the data of several shelter websites to aggregate it, a web frontend to visualize it and a Twitter bot that tweets new animals as they are added.

A first draft was also written of the points a transparency ordinance should cover and, finally, bicicoruña and several other bike-sharing systems were added to the citybik.es API.

A very productive weekend of work in which a lot of data was freed and which lays the groundwork for continuing in this field, probably at the Ateneo Atlántico de Prototipado that our friends at Coruña Dixital are cooking up!

Of course, all feedback is welcome and we're happy to receive proposals for actions and all kinds of collaboration to keep educating about and promoting Open Data and free technologies in society ;)

Photos of the event: https://www.facebook.com/963281590451088/photos/?tab=album&album_id=1064463823666197

 


by gpul on October 25, 2016 07:31 PM

October 08, 2016

XII Introductory Sessions on GNU/Linux and Free Software

And we keep going at full speed this term, this time with a small introductory workshop on Free Software and GNU/Linux in which, as every year, we'll teach everyone interested the basic commands to work with the terminal in GNU/Linux (specifically the Ubuntu distribution used at the Facultade), and we'll give a short intro to what free software is and why it's so cool.

The talk will take place next Thursday, October 13, in lab 1.2, from 18:00 to 20:00.

Admission is completely free and there's no need to sign up, so we'll be waiting for you!! :)


by gpul on October 08, 2016 01:11 PM

October 06, 2016

KDE 20th anniversary party

This year the KDE community is celebrating its 20th anniversary.

For that reason, we're organizing an anniversary celebration in Santiago de Compostela on October 15.

The event will take place at Matadoiro Compostela, in Praza Matadoiro.

To attend, you'll need to fill in the registration form.

The activity will start at 20:30 with a talk titled "How to contribute to KDE and other Free Software projects". This talk will explain various ways of contributing to free software projects, focusing on the case of KDE, both in programming and code development itself and in other areas (translation, promotion, etc.).

Afterwards there will be a dinner to celebrate KDE's twentieth anniversary and the opening of GPUL's space in Santiago. The approximate price of the dinner will be €10 per person.

 

by fid_jose on October 06, 2016 07:43 PM

GPUL expands to Santiago de Compostela

GPUL will organize activities in Santiago de Compostela. These activities will complement the ones we hold in A Coruña and will take place mainly at Matadoiro Compostela (Praza do Matadoiro s/n).

The first activity will be the twentieth anniversary of the KDE community, which will take place on the 15th.

Another activity we're preparing is a workshop on image processing and editing with free software, which will be announced soon.

by fid_jose on October 06, 2016 07:39 PM

October 05, 2016

Frogr 1.2 released

Of course, just a few hours after releasing frogr 1.1, I noticed that there was actually no good reason to depend on gettext 0.19.8 for the sole purpose of removing the intltool dependency, since 0.19.7 would be enough.

So, as raising that requirement up to 0.19.8 was causing trouble packaging frogr for some distros still on 0.19.7 (e.g. Ubuntu 16.04 LTS), I decided to do a quick new release, and frogr 1.2 is now out with that only change.

One direct consequence is that you can now install the packages for Ubuntu from my PPA if you have Ubuntu Xenial 16.04 LTS or newer, instead of having to wait for Ubuntu Yakkety Yak (yet to be released). Other than that, 1.2 is exactly the same as 1.1, so you probably don't want to package it for your distro if you already did it for 1.1 without trouble. Sorry for the noise.

 

by mario on October 05, 2016 01:46 PM

Frogr 1.1 released

After almost one year, I’ve finally released another small iteration of frogr with a few updates and improvements.

Screenshot of frogr 1.1

Not many things, to be honest, but just a few as I said.

Besides, another significant difference compared to previous releases is related to the way I’m distributing it: in the past, if you used Ubuntu, you could configure my PPA and install it from there even in fairly old versions of the distro. However, this time that’s only possible if you have Ubuntu 16.10 “Yakkety Yak”, as that’s the one that ships gettext >= 0.19.8, which is required now that I removed all trace of intltool (more info in this post).

However, this is also the first time I'm using flatpak to distribute frogr so, regardless of which distribution you use, you can now install and run it as long as you have the org.gnome.Platform/x86_64/3.22 stable runtime installed locally. Not too bad! :-) See more detailed instructions on its web site.

That said, note that you'll also want the portal frontend service and a backend implementation available, so that you can authorize your flickr account using the browser outside the sandbox, via the OpenURI portal. If you don't have that at hand, you can still use the sandboxed version of frogr, but you'd need to copy your configuration files from a non-sandboxed frogr (under ~/.config/frogr) first, right into ~/.var/app/org.gnome.Frogr/config, and then it should be usable again (opening files in external viewers would not work yet, though!).
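
In case it helps, copying the configuration over could look roughly like this (the destination follows the usual flatpak per-app layout; double-check the paths on your system):

$ mkdir -p ~/.var/app/org.gnome.Frogr/config
$ cp -a ~/.config/frogr ~/.var/app/org.gnome.Frogr/config/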

So this is all; I hope it works well and is helpful to you. I just finished uploading a few hundred pictures a couple of days ago and it seemed to work fine, but you never know… the devil is in the details!

 

by mario on October 05, 2016 01:24 AM

October 04, 2016

Open Data Hackathiño

This year we're starting strong, hand in hand with Árticos and GDG Coruña and within the Coruña Dixital program, focusing on Open Data through the Open Data Hackathiño, which will take place on October 22 and 23 at the NORMAL space of the Universidade da Coruña.

The Hackathiño is a hackathon, but more homegrown, and this edition will focus on small collectives, associations, SMEs, etc., with the goal of encouraging them to free their data and showing them the super interesting things that can be done by betting on open data.

Developers and collectives will propose projects and work in teams over a weekend in order to present, on Sunday afternoon, a small working prototype based on open data.

Admission is completely free and of course there will be prizes for the best projects and food for all attendees, who will enjoy an atmosphere of innovation in which to spend a weekend learning, networking and, of course, having a great time.

We also have some posts where you can propose ideas and data sources to use during the event, and we strongly encourage you to contribute ideas ;)

More information on the website: hackathino.gpul.org

 


by gpul on October 04, 2016 08:23 PM

September 30, 2016

Cross-compiling WebKit2GTK+ for ARM

I haven't blogged in a while (mostly due to lack of time, as usual) but I thought I'd write something today to let the world know about one of the things I've worked on a bit during this week, while remotely attending the Web Engines Hackfest from home:

Setting up an environment for cross-compiling WebKit2GTK+ for ARM

I know this is not new, nor ground-breaking news, but the truth is that I could not find any up-to-date documentation on the topic in any public forum (the only one I found was this pretty old post from the time WebKitGTK+ used autotools), so I thought I would devote some time to it now, so that I could save time in the future.

Of course, I know for a fact that many people use local recipes to cross-compile WebKit2GTK+ for ARM (or simply build on the target machine, which usually takes a looong time), but those are usually ad-hoc setups based on environments that are hard to reproduce locally (or at least hard for me) and, even worse, often bound to downstream projects, so I thought it would be nice to have something tested with upstream WebKit2GTK+ and publish it on trac.webkit.org.

So I spent some time working on this with the idea of producing some step-by-step instructions including how to create a reproducible environment from scratch and, after some inefficient flirting with a VM-based approach (which turned out to be insanely slow), I finally settled on creating a chroot + provisioning it with a simple bootstrap script + using a simple CMake Toolchain file, and that worked quite well for me.
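
To give an idea of the CMake side of it, a minimal cross-compilation toolchain file looks something like the sketch below; the compiler names and chroot path are placeholders for the example, and the real, complete file lives in the repository linked at the end of this post:

# toolchain-arm.cmake (illustrative sketch; paths are placeholders)
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)
set(CMAKE_C_COMPILER arm-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)
# Search for headers and libraries only inside the target chroot
set(CMAKE_SYSROOT /var/chroots/wk2gtk-arm)
set(CMAKE_FIND_ROOT_PATH /var/chroots/wk2gtk-arm)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)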

On my fast desktop machine I can now get a full build of WebKit2GTK+ 2.14 (or trunk) in less than 1 hour, which is quite a productivity bump if you compare it to the approximately 18 hours it takes when I build it natively on the target ARM device I have 🙂

Of course, I’ve referenced this documentation in trac.webkit.org, but if you want to skip that and go directly to it, I’m hosting it in a git repository here: github.com/mariospr/webkit2gtk-ARM.

Note that I’m not a CMake expert (nor even close) so the toolchain file is far from perfect, but it definitely does the job with both the 2.12.x and 2.14.x releases as well as with the trunk, so hopefully it will be useful as well for someone else out there.

Last, I want to thank the organizers of this event for making it possible once again (and congrats to Igalia, which just turned 15 years old!) as well as my employer for supporting my attendance to the hackfest, even if I could not make it in person this time.


by mario on September 30, 2016 07:10 PM

September 21, 2016

GNU/Linux installation workshop

 

 

One more year, GPUL and the Oficina de Software Libre are organizing two workshops to help students of the Facultade de Informática install and configure a free operating system on their laptops.

In addition, the most important features of Free Software and of the installed operating system will be presented, and any questions on the subject will be answered.

The event will take place in two sessions, one in the morning and one in the afternoon, in room 2.1a:

September 27, 10:30-14:30

September 28, 15:30-19:30

Access to the workshop requires prior registration, up to the room's capacity (30 people):

http://osl.cixug.es/taller-de-instalacion-de-gnulinux-na-facultade-de-informatica-da-udc-2/


by gpul on September 21, 2016 08:49 PM

August 31, 2016

Extraordinary Assembly - 14/09/2016

An Extraordinary Assembly of GPUL is hereby called for Wednesday, September 14, 2016, in room 2.1b of the Facultade de Informática.

First call: 20:00
Second call: 20:30

Agenda:

  • Reading and approval, if applicable, of the minutes of the previous Assembly.
  • Reading of member registrations and withdrawals since the last Assembly.
  • Presentation of the activities carried out and those planned for the current year.
  • Discussion to determine the future of GPUL.
  • Questions and requests.


UPDATE

In the end the assembly will take place in room 2.1b, since the Aula de Graos is occupied on that date.


Signed,
Pablo Castro,
Secretary of GPUL.

by castrinho8 on August 31, 2016 09:27 PM

May 24, 2016

I/O bursts with QEMU 2.6

QEMU 2.6 was released a few days ago. One new feature that I have been working on is the new way to configure I/O limits in disk drives to allow bursts and increase the responsiveness of the virtual machine. In this post I’ll try to explain how it works.

The basic settings

First I will summarize the basic settings that were already available in earlier versions of QEMU.

Two aspects of the disk I/O can be limited: the number of bytes per second and the number of operations per second (IOPS). For each one of them the user can set a global limit or separate limits for read and write operations. This gives us a total of six different parameters.

I/O limits can be set using the throttling.* parameters of -drive, or using the QMP block_set_io_throttle command. These are the names of the parameters for both cases:

-drive                  block_set_io_throttle
throttling.iops-total   iops
throttling.iops-read    iops_rd
throttling.iops-write   iops_wr
throttling.bps-total    bps
throttling.bps-read     bps_rd
throttling.bps-write    bps_wr

It is possible to set limits for both IOPS and bps at the same time, and for each case we can decide whether to have separate read and write limits or not, but if iops-total is set then neither iops-read nor iops-write can be set. The same applies to bps-total and bps-read/write.

The default value of these parameters is 0, and it means unlimited.

In its most basic usage, the user can add a drive to QEMU with a limit of, say, 100 IOPS with the following -drive line:

-drive file=hd0.qcow2,throttling.iops-total=100

We can do the same using QMP. In this case all these parameters are mandatory, so we must set to 0 the ones that we don’t want to limit:

   { "execute": "block_set_io_throttle",
     "arguments": {
        "device": "virtio0",
        "iops": 100,
        "iops_rd": 0,
        "iops_wr": 0,
        "bps": 0,
        "bps_rd": 0,
        "bps_wr": 0
     }
   }

I/O bursts

While the settings that we have just seen are enough to prevent the virtual machine from performing too much I/O, it can be useful to allow the user to exceed those limits occasionally. This way we can have a more responsive VM that is able to cope better with peaks of activity while keeping the average limits lower the rest of the time.

Starting from QEMU 2.6, it is possible to allow the user to do bursts of I/O for a configurable amount of time. A burst is an amount of I/O that can exceed the basic limit, and there are two parameters that control them: their length and the maximum amount of I/O they allow. These two can be configured separately for each one of the six basic parameters described in the previous section, but here we’ll use ‘iops-total’ as an example.

The I/O limit during bursts is set using ‘iops-total-max’, and the maximum length (in seconds) is set with ‘iops-total-max-length’. So if we want to configure a drive with a basic limit of 100 IOPS and allow bursts of 2000 IOPS for 60 seconds, we would do it like this (the line is split for clarity):

   -drive file=hd0.qcow2,
          throttling.iops-total=100,
          throttling.iops-total-max=2000,
          throttling.iops-total-max-length=60

Or with QMP:

   { "execute": "block_set_io_throttle",
     "arguments": {
        "device": "virtio0",
        "iops": 100,
        "iops_rd": 0,
        "iops_wr": 0,
        "bps": 0,
        "bps_rd": 0,
        "bps_wr": 0,
        "iops_max": 2000,
        "iops_max_length": 60,
     }
   }

With this, the user can perform I/O on hd0.qcow2 at a rate of 2000 IOPS for 1 minute before it’s throttled down to 100 IOPS.

The user will be able to do bursts again if there’s a sufficiently long period of time with unused I/O (see below for details).

The default value for ‘iops-total-max’ is 0 and it means that bursts are not allowed. ‘iops-total-max-length’ can only be set if ‘iops-total-max’ is set as well, and its default value is 1 second.

Controlling the size of I/O operations

When applying IOPS limits all I/O operations are treated equally regardless of their size. This means that the user can take advantage of this in order to circumvent the limits and submit one huge I/O request instead of several smaller ones.

QEMU provides a setting called throttling.iops-size to prevent this from happening. This setting specifies the size (in bytes) of an I/O request for accounting purposes. Larger requests will be counted proportionally to this size.

For example, if iops-size is set to 4096 then an 8KB request will be counted as two, and a 6KB request will be counted as one and a half. This only applies to requests larger than iops-size: smaller requests will be always counted as one, no matter their size.

The default value of iops-size is 0 and it means that the size of the requests is never taken into account when applying IOPS limits.

Applying I/O limits to groups of disks

In all the examples so far we have seen how to apply limits to the I/O performed on individual drives, but QEMU allows grouping drives so they all share the same limits.

This feature is available since QEMU 2.4. Please refer to the post I wrote when it was published for more details.
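
As a quick reminder of how that looks (see the linked post for details, and note that the group name below is made up), drives that share the same throttling.group share a single set of limits:

-drive file=hd1.qcow2,throttling.iops-total=100,throttling.group=shared0
-drive file=hd2.qcow2,throttling.iops-total=100,throttling.group=shared0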

The Leaky Bucket algorithm

I/O limits in QEMU are implemented using the leaky bucket algorithm (specifically the “Leaky bucket as a meter” variant).

This algorithm uses the analogy of a bucket that leaks water constantly. The water that gets into the bucket represents the I/O that has been performed, and no more I/O is allowed once the bucket is full.

To see the way this corresponds to the throttling parameters in QEMU, consider the following values:

  iops-total=100
  iops-total-max=2000
  iops-total-max-length=60

[Figure: a leaky bucket]

The bucket is initially empty, therefore water can be added until it’s full at a rate of 2000 IOPS (the burst rate). Once the bucket is full we can only add as much water as it leaks, therefore the I/O rate is reduced to 100 IOPS. If we add less water than it leaks then the bucket will start to empty, allowing for bursts again.

Note that since water is leaking from the bucket even during bursts, it will take a bit more than 60 seconds at 2000 IOPS to fill it up. After those 60 seconds the bucket will have leaked 60 x 100 = 6000, allowing for 3 more seconds of I/O at 2000 IOPS.

Also, due to the way the algorithm works, longer bursts can be done at a lower I/O rate, e.g. 1000 IOPS during 120 seconds.
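
To make the idea concrete, here is a minimal, illustrative sketch of the “leaky bucket as a meter” logic in Python. This is not QEMU's actual code, just the concept; note how sizing the bucket as iops-total-max × iops-total-max-length reproduces the arithmetic above:

class LeakyBucket:
    def __init__(self, avg, burst_max, burst_length):
        self.capacity = burst_max * burst_length  # bucket size, e.g. 2000 * 60
        self.leak_rate = avg                      # constant leak, e.g. 100 IOPS
        self.level = 0.0                          # current amount of "water"
        self.last = 0.0                           # time of the last request

    def account(self, now, ops=1):
        """Return True if 'ops' operations are allowed at time 'now'."""
        # Water leaks out at the average rate, even during bursts.
        self.level = max(0.0, self.level - (now - self.last) * self.leak_rate)
        self.last = now
        if self.level + ops > self.capacity:
            return False  # bucket full: throttled down to the leak rate
        self.level += ops
        return True

With avg=100, burst_max=2000 and burst_length=60, sustained I/O at 2000 IOPS fills the bucket after roughly 63 seconds, matching the numbers discussed above.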

Acknowledgments

As usual, my work in QEMU is sponsored by Outscale and has been made possible by Igalia and the help of the QEMU development team.


Enjoy QEMU 2.6!

by berto on May 24, 2016 11:47 AM

May 17, 2016

GPUL Summer Of Code

What is this about?

In our eagerness to spread free software, we want to hire a student interested in these technologies for 3 months, part-time, so that, with present-day technologies, they can help us with a couple of web-technology-based projects we want to push forward from GPUL; free software, of course.

Basic requirements

  • Being a defender of the free software movement is essential.

  • For the web project to be maintainable, we need you to know something about:

    • MVC pattern (Model-View-Controller).

    • Object-Oriented Programming.

    • Git.

    • ORM (Object-Relational Mapping) or similar.

    You don't need to be an expert. When developing free software, anyone will be able to help you improve your code.

  • Having a great desire to learn.

  • We won't be looking at degrees, so whether you're studying another degree or a vocational program, or you're self-taught, you can also apply without any problem.

 

It would be great (not a requirement, but if you want to stand out from the rest, these are some pointers to what interests us):

  • Knowledge of some web framework: Django, Rails, ExpressJS, SpringMVC, Laravel…

  • Telling us that, instead of jQuery, you know how to use JavaScript with AngularJS or ReactJS…

  • Agile methodologies: Scrum, eXtreme Programming, Code Review, Continuous Integration…

  • Knowing your favourite IDE: Eclipse, Atom, Emacs, vim…

  • Linux systems administration: Bash, Debian, Docker, SSH…

  • Having collaborated with free software communities, through development, organizing events or in some other way.

  • Wanting to use this as part of your Master's thesis, Bachelor's thesis, end-of-degree project or similar.

  

What we offer

  • Flexible hours.

  • Use of our room at the faculty to work, if you prefer it to teleworking.

  • Weekly follow-up meetings.

  • Developing free software alongside experienced mentors.

  • We'll advise you on submitting your project to the Concurso Universitario de Software Libre and to the award for the best free software final-year project, or similar.

  • Becoming part of GPUL and all the activities we organize: technical talks where you can learn from the best professionals, hackathons and group trips to events like FOSDEM.


About GPUL

GPUL is an association created at the FIC in 1998 to promote and spread the use of free software. Today we remain active, trying to make the world a better place by promoting free culture, always in a good atmosphere, learning and growing both personally and professionally.

We were responsible for organizing several first-rate international events in this field, such as GUADEC in 2012 and Akademy in 2015 (the main gatherings of Gnome and KDE users and developers, respectively).

We take what we learned there and pour it into other events which, even if not international, we organize with the same care, such as the Xornadas de Introdución a GNU/Linux or the Xornadas Libres, to which we always bring very good people from many parts of the world to tell us about things.

Now, in addition, with GPUL Labs we are gathering a community of free software developers in our city, so that nobody has to go elsewhere to do interesting things with modern technologies and build real projects.

 

Process

Send us your CV at info@gpul.org, along with a link to your LinkedIn profile (if you have one) and your academic transcript.

If you've written free software in the past, include a link to somewhere we can see your contributions. If not, don't worry! But it's good to also send us some code you've written that makes you feel proud, so that we have something more to evaluate you with; a class assignment or something similar will do.

And finally, we may want to have an interview with you, either in person or by videoconference.

Proposals will be accepted until June 12 inclusive, and a decision will be made by June 15 at the latest, so that the selected student, if they decide to use the developed project as their PFC/TFG/TFM, can submit the project proposal in time.

 

Salary and contract

We offer a part-time contract for 3 months with a salary of €400 per month, so that you can combine it with your studies and at the same time get your first work experience.

The initial plan is to run it between June and September, and we're flexible with the schedule; if you have any special circumstance such as exams or similar, don't hesitate to talk to us.

 

Tips

If you currently use proprietary software on a regular basis, talk to us. Seriously: realizing it and wanting to change it will make a good impression on us.

Don't send us documents in proprietary formats such as doc or docx. Because you want to make a good impression on us, right?

If you haven't written free software yet, don't worry. Maybe your first free project will be with GPUL.

Don't sign your email with things like “Sent from my iPhone”.

 

This activity is part of the activities the association carries out in collaboration with AMTEGA under the collaboration agreement signed for the promotion of free software in Galicia, and belongs to Amtega's 2016 Free Software Action Plan.


by gpul on May 17, 2016 03:30 PM

April 13, 2016

Chromium Browser on xdg-app

Last week I had the chance to attend for 3 days the GNOME Software Hackfest, organized by Richard Hughes and hosted at the brand new Red Hat’s London office.

And besides meeting new people and some old friends (which I admit is one of my favourite aspects of attending these kinds of events), and discovering what is now my new favourite place for fast food near London Bridge, I happened to learn quite a few new things while working on my particular personal quest: getting the Chromium browser to run as an xdg-app.

While this might not seem to be an immediate need for Endless right now (we currently ship a Chromium-based browser as part of our OSTree based system), this was definitely something worth exploring as we are now implementing the next version of our App Center (which will be based on GNOME Software and xdg-app). Chromium updates very frequently with fixes and new features, and so being able to update it separately and more quickly than the OS is very valuable.

Endless OS App Center
Screenshot of Endless OS’s current App Center

So, while Joaquim and Rob were working on the GNOME Software related bits and discussing Continuous Integration with the rest of the crowd, I spent some time learning about xdg-app and trying to get Chromium to build that way, which, unsurprisingly, was not an easy task.

Fortunately, the base documentation about xdg-app together with Alex Larsson’s blog post series about this topic (which I wholeheartedly recommend reading) and some experimentation from my side was enough to get started with the whole thing, and I was quickly on my way to fixing build issues, adding missing deps and the like.

Note that my goal at this time was not to get a fully featured Chromium browser running, but to get something running based on the version that we use in Endless (Chromium 48.0.2564.82), with a couple of things disabled for now (e.g. Chromium's own sandbox, udev integration…) and, of course, punching some holes in the xdg-app configuration so that Chromium can access the parts of the system it needs to function (e.g. network, X11, shared memory, pulseaudio…).
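To give an idea of what those holes look like, here is a minimal sketch of the relevant part of an xdg-app manifest; the app id, runtime version and exact permission list are illustrative assumptions, not the actual configuration used for this experiment:

{
    "app-id": "org.chromium.Chromium",
    "runtime": "org.freedesktop.Platform",
    "runtime-version": "1.4",
    "sdk": "org.freedesktop.Sdk",
    "command": "chromium",
    "finish-args": [
        "--share=network",
        "--share=ipc",
        "--socket=x11",
        "--socket=pulseaudio",
        "--device=dri"
    ]
}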

Of course, the long term goal is to close as many of those holes as possible using Portals instead, as well as not giving up on Chromium’s own sandbox right away (some work will be needed here, since `setuid` binaries are a no-go in xdg-app’s world), but for the time being I’m pretty satisfied (and kind of surprised, even) that I managed to get the whole beast built and running after 4 days of work since I started :-).

But, as Alberto usually says… “screencast or it didn’t happen!”, so I recorded a video yesterday to properly share my excitement with the world. Here you have it:

[VIDEO: Chromium Browser running as an xdg-app: https://www.youtube.com/embed/euwSnOm89hM]

As mentioned above, this is work-in-progress stuff, so please hold your horses and manage your expectations wisely. It's not quite there yet in terms of what I'd like to see, but it is definitely a step in the right direction, and something I hope will be useful not only for us but for the entire Linux community. Should you be curious about the current status of the whole thing, feel free to check the relevant files in its git repository here.

Last, I would like to finish this blog post by saying thanks, especially to Richard Hughes for organizing this event, as well as to the GNOME Foundation and Red Hat for their support in the development of GNOME Software and xdg-app. Finally, I'd also like to thank my employer Endless for supporting my attendance at this hackfest. It's been a terrific week indeed… thank you all!

Credit to Georges Stavracas

by mario on April 13, 2016 11:17 AM

February 18, 2016

Improving Media Source Extensions on WebKit ports based on GStreamer

During 2014 I started to become interested in how GStreamer was used in WebKit to play media content and how it related to Media Source Extensions (MSE). During 2015, my company, Igalia, strengthened its cooperation with Metrological to enhance the multimedia support in their customized version of WebKitForWayland, the web platform they use for their products for the set-top box market. This was an opportunity to do really interesting things in the multimedia field on a really nice hardware platform: the Raspberry Pi.

What are Media Source Extensions?

Normal URL playback in the <video> tag works by configuring the platform player (GStreamer in our case) with a source HTTP URL, so it behaves much like any other external player, downloading the content and showing it in a window. Special cases such as Dynamic Adaptive Streaming over HTTP (DASH) are handled automatically by the player, which then becomes more complex. At the same time, the JavaScript code in the webpage has no way to know what is happening with the quality changes in the stream.

The MSE specification lets the authors move that responsibility to the JavaScript side in those kinds of scenarios. A Blob object (Blob URL) can be configured to get its data from a MediaSource object. The MediaSource object can instantiate SourceBuffer objects. Video and Audio elements in the webpage can be configured with those Blob URLs. With this setup, JavaScript can manually feed binary data to the player by appending it to the SourceBuffer objects. The data is buffered and the playback time ranges generated by the data are accessible to JavaScript. The web page (and not the player) now has control over the data being buffered, its quality, codec and provenance. It is even possible to synthesize the media data programmatically if needed, opening the door to media editors and media effects coded in JavaScript.

mse1

MSE is being adopted by the main content broadcasters on the Internet. It’s required by YouTube for its dedicated interface for TV-like devices and they even have an MSE conformance test suite that hardware manufacturers wanting to get certified for that platform must pass.

MSE architecture in WebKit

WebKit is a multiplatform framework with an end user API layer (WebKit2), an internal layer common to all platforms (WebCore) and particular implementations for each platform (GObject + GStreamer, in our case). Google and Apple have done great work bringing MSE to WebKit. They have led the effort to implement the common WebCore abstractions needed to support MSE, such as MediaSource, SourceBuffer, MediaPlayer and the integration with HTMLMediaElement (the video tag). They have also provided generic platform interfaces (MediaPlayerPrivateInterface, MediaSourcePrivate, SourceBufferPrivate), a working platform implementation for Mac OS X, and a mock platform for testing.

mse2

The main contributions to the platform implementation for ports using GStreamer for media playback were done by Stephane Jadaud and Sebastian Dröge on bugs #99065 (initial implementation with hardcoded SourceBuffers for audio and video), #139441 (multiple SourceBuffers) and #140078 (support for tracks, more containers and encoding formats). This last patch still hasn't been merged into trunk, but I used it as the starting point for the work to be done.

GStreamer, unlike other media frameworks, is strongly based on the concept of a pipeline: the data traverses a series of linked elements (sources, demuxers, decoders, sinks) which process it in stages. At a given point in time, different pieces of data are in the pipeline at the same time, in varying stages of processing. In the case of MSE, a special WebKitMediaSrc GStreamer element is used as the data source in the pipeline and also serves as the interface with the upper MSE layer, acting as a client of MediaSource and SourceBuffer. WebKitMediaSrc is spawned by GstPlayBin (a container which manages everything automatically inside) when an MSE SourceBuffer is added to the MediaSource. The MediaSource is linked with the MediaPlayer, which has MediaPlayerPrivateGStreamer as its private platform implementation. In the design we were using at that time, WebKitMediaSrc was responsible for demuxing the data appended to each SourceBuffer into several streams (I've never seen more than one stream per SourceBuffer, though) and for reporting the statistics and the samples themselves to the upper layer according to the MSE spec. To do that, WebKitMediaSrc encapsulated an appsrc, a demuxer and a parser per source. The remaining pipeline elements after WebKitMediaSrc were in charge of decoding and playback.

Processing appends with GStreamer

The MSE implementation in Chromium uses a chunk demuxer to parse (demux) the data appended to the SourceBuffers. It keeps the parsing state and provides a self-contained way to perform the demuxing. Reusing that Chromium code would have been the easiest solution. However, GStreamer is a powerful media framework and we strongly believe that the demuxing stage can be done using GStreamer as part of the pipeline.

Because of the way GStreamer works, it's easy to know when an element outputs new data, but there's no easy way to know when it has finished processing its input without discontinuing the flow with an End Of Stream (EOS) event and effectively resetting the element. One simple approach that works is to use timeouts. If the demuxer doesn't produce any output after a given time, we consider that the append has produced all the MediaSamples it could and has therefore finished. Two different timeouts were used: one to detect when appends that produce no samples have finished (noDataToDecodeTimeout) and another to detect when no more samples are coming (lastSampleToDecodeTimeout). The former needs to be longer than the latter.
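As an illustration of the heuristic, here is a minimal sketch using GLib timeouts from Python; the real implementation is C++ inside WebKit, and the timeout value below is made up:

from gi.repository import GLib

LAST_SAMPLE_TO_DECODE_TIMEOUT_MS = 100  # made-up value for illustration

class AppendMonitor:
    """Declares an append finished when the demuxer stays silent long enough."""

    def __init__(self, on_append_done):
        self.timeout_id = 0
        self.on_append_done = on_append_done

    def sample_arrived(self):
        # Every new sample coming out of the demuxer re-arms the timer.
        if self.timeout_id:
            GLib.source_remove(self.timeout_id)
        self.timeout_id = GLib.timeout_add(LAST_SAMPLE_TO_DECODE_TIMEOUT_MS,
                                           self._timed_out)

    def _timed_out(self):
        # No samples for a while: consider the append complete.
        self.timeout_id = 0
        self.on_append_done()
        return GLib.SOURCE_REMOVE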

Another technical challenge was to perform append processing when the pipeline isn't playing. Until playback starts, the pipeline just prerolls (fills up with the available data until the first frame can be rendered on the screen) and then pauses there until continuous playback can start. However, the MSE spec expects the appended data to be completely processed and delivered to the upper MSE layer first; it is then up to JavaScript to decide whether playback on screen must start or not. The solution was to add intermediate queue elements with a very big capacity to force a preroll stage long enough for the probes on the demuxer source (output) pads to "see" all the samples pass beyond the demuxer. This is how the pipeline looked at that time (see also the full dump):

mse3

While focusing on making the YouTube 2015 tests pass on our Raspberry Pi 1, we realized that the generated buffered ranges had strange micro-holes (e.g. [0, 4.9998]; [5.0003, 10.0]) and that was confusing the tests. There were definitely differences of interpretation between ChunkDemuxer and qtdemux, but this is a minor problem which can be solved by adding some extra time ranges that fill the holes. All these changes got the append feature into good shape, and then we could start watching videos more or less reliably on YouTube TV for the first time.

Basic seek support

Let's focus on a real use case for a moment. The JavaScript code can be appending video data in the [20, 25] range and audio data in the [30, 35] range (because the [20, 30] range was appended before) while we're still playing the [0, 5] range. Our previous design let the media buffers leave the demuxer and enter the decoder without control. This worked nicely for sequential playback, but was not compatible with non-linear playback (seeks). Feeding the decoder with video data for [0, 5] plus [20, 25] causes a big pause (while the timeline traverses [5, 20]) followed by a bunch of decoding errors (the decoder needs sequential data to work).

One possible improvement to support non-linear playback is to implement buffer stealing and buffer reinjection at the demuxer output, so the buffers never go past that point without control. A probe steals the buffers, encapsulates them inside MediaSamples, pumps them to the upper MSE layer for storage and range reporting, and finally drops them at the GStreamer level. The buffers can later be reinjected by the enqueueSample() method when JavaScript decides to start playback at the target position. The flushAndEnqueueNonDisplayingSamples() method reinjects auxiliary samples from before the target position, just to help keep the decoder sane and in the right internal state when the useful samples are inserted. You can see the dropping and reinjection points in the updated diagram:

mse4
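To make the mechanism more concrete, here is a minimal sketch of a buffer-stealing probe written against GStreamer's Python bindings; WebKit's real implementation is C++, and stolen_samples merely stands in for the upper MSE layer:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

stolen_samples = []  # stand-in for the MSE layer's sample storage

def steal_buffer(pad, info):
    buf = info.get_buffer()
    # Report the sample upwards for storage and buffered-range accounting...
    stolen_samples.append((buf.pts, buf.duration, buf))
    # ...and drop it here, so it never reaches the decoder on its own;
    # it can be reinjected later when JavaScript starts playback.
    return Gst.PadProbeReturn.DROP

def attach_stealing_probe(demuxer_src_pad):
    # Steal every buffer leaving the demuxer source pad.
    return demuxer_src_pad.add_probe(Gst.PadProbeType.BUFFER, steal_buffer)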

The synchronization issues of managing several independent timelines at once must also be taken into account. Each of the ongoing append and playback operations happens in its own timeline, but the pipeline is designed to be configured for a common playback segment. The playback state (READY, PAUSED, PLAYING), the flushes needed by the seek operation and the prerolls also affect all the pipeline elements. This problem can be minimized by manipulating the segments by hand to accommodate the different timings and by getting the help of very large queues to sustain the processing in the demuxer, even when the pipeline is still paused. These changes can solve the issues and get the "47. Seek" test working, but YouTube TV is more demanding and requires a more structured design.

Divide and conquer

At this point we decided to simplify MediaPlayerPrivateGStreamer and refactor all the MSE logic into a new subclass called MediaPlayerPrivateGStreamerMSE. After that, the unified pipeline was split into N append pipelines (one per SourceBuffer) and one playback pipeline. This change solved the synchronization issues and split a complex problem into two simpler ones. The AppendPipeline class, visible only to the MSE private player, is in charge of managing all the append logic. There's one instance for each of the N append pipelines.

Each append pipeline is created by hand. It contains an appsrc (to feed data into it), a typefind element, a qtdemux, optionally a decoder (in case we want to support Encrypted Media Extensions too), and an appsink (to pick up the parsed data). In my effort to simplify, I removed support for all formats except ISO MP4, the only one really needed for YouTube. The other containers could be reintroduced in the future.

mse5

The playback pipeline is what remains of the old unified pipeline, but simpler. It's still based on playbin, and the main difference is that WebKitMediaSrc is now simpler. It consists of N sources (one per SourceBuffer), each composed of an appsrc (to feed buffered samples), a parser block and the src pads. Uridecodebin is in charge of instantiating it, like before. The PlaybackPipeline class was created to take care of some of the management logic.

mse6

The AppendPipeline class manages the callback forwarding between threads, using asserts to strictly enforce that WebCore MSE classes are accessed from the main thread. AtomicString and all the classes inheriting from RefCounted (instead of ThreadSafeRefCounted) can't be safely managed from different threads. This includes most of the classes used in the MSE implementation. However, the demuxer probes and other callbacks sometimes happen in the streaming thread of the corresponding element, not in the main thread, which is why the call forwarding must be done.

AppendPipeline also uses an internal state machine to manage the different stages of the append operation and all the actions relevant to each stage (starting/stopping the timeouts, processing the samples, finishing the appends and managing SourceBuffer aborts).

mse7

Seek support for the real world

With this new design, the use case of a typical seek works like this (very simplified):

  1. The video may currently be playing at some position (buffered, of course).
  2. The JavaScript code appends data for the new target position to each of the video/audio SourceBuffers. Each AppendPipeline processes the data and JavaScript is aware of the new buffered ranges.
  3. JavaScript seeks to the new position. This ends up calling the seek() and doSeek() methods.
  4. MediaPlayerPrivateGStreamerMSE instructs WebKitMediaSrc to stop accepting more samples until further notice and to prepare the seek (reset the seek-data and need-data counters). The player private performs the real GStreamer seek in the playback pipeline and leaves the rest of the seek pending for when WebKitMediaSrc is ready.
  5. The GStreamer seek causes some changes in the pipeline and eventually all the appsrc in WebKitMediaSrc emit the seek-data and need-data events. Then WebKitMediaSrc notifies the player private that it’s ready to accept samples for the target position and needs data. MediaSource is notified here to seek and this triggers the enqueuing of the new data (non displaying samples and visible ones).
  6. The seek at the player private level that was left pending in step 4 continues, giving permission to WebKitMediaSrc to accept samples again.
  7. Seek is completed. The samples enqueued in step 5 flow now through the playback pipeline and the user can see the video from the target position.

That was just the typical case, but more complex scenarios are also supported. This includes multiple seeks (pressing the forward/backward button several times), seeks to buffered areas (the easiest ones) and to unbuffered areas (where the seek sequence needs to wait until the data for the target area is appended and buffered).

Close cooperation from qtdemux is also required in order to get accurate presentation timestamps (PTS) for the processed media. We detected a special case when appending data much further forward in the media stream during a seek: qtdemux kept generating sequential presentation timestamps, completely ignoring the TFDT atom, which tells where the timestamps of the new data block must start. I had to add a new "always-honor-tfdt" attribute to qtdemux to solve that problem.

With all these changes the YouTube 2015 and 2016 tests are green for us and YouTube TV is completely functional on a Raspberry Pi 2.

Upstreaming the code during Web Engines Hackfest 2015

All this work is currently in the Metrological WebKitForWayland repository, but it could be a great upstream contribution. Last December I was invited to the Web Engines Hackfest 2015, an event hosted at Igalia's premises in A Coruña (Spain). I attended with the intention of starting the upstreaming process of our MSE implementation for GStreamer, so that other ports such as WebKitGTK+ and WebKitEFL can also benefit from it. Thanks a lot to our sponsors for making it possible.

At the end of the hackfest I managed to get something that builds in a private branch. I'm currently devoting some time to working on the regressions in the YouTube 2016 tests, cleaning up unrelated EME stuff and adapting the code to the style guidelines. Eventually, I'm going to submit the patch for review on Bugzilla. There are some topics that I'd like to discuss with other engineers as part of this process, such as the interpretation of the spec regarding how the ReadyState is computed.

In parallel to the upstreaming process, our plans for the future include getting rid of the append timeouts by finding a better alternative, improving append performance, and testing seek even more thoroughly with other real use cases. In the long term we should add support for appendStream() and increase the set of supported media containers and codecs, at least to WebM and VP8.

Let’s keep hacking!

by eocanha on February 18, 2016 08:10 PM

January 26, 2016

Anita Borg Week

One more year, Anita Borg Week returns to the FIC with the goal of making visible the success achieved by many women in the field of new technologies.

They present a programme that will address professional careers with FIC alumnae, as well as how to approach digital design for diversity.

Take a look and note down the talks; we are sure you will find them interesting ;)

https://wiki.fic.udc.es/semanaanitaborg/eventos/coruna_2016.html

Attachment: POSTER.png (325.63 KB)

by gpul on January 26, 2016 01:13 AM

January 25, 2016

GPUL Labs kick off

This year at GPUL we decided our usual activities needed a rethink, so we set out to form a community of developers who care about Free Software, Hardware and Culture here in A Coruña and in our region.

In a nutshell, the <Labs/> are a series of workshops, talks and programming hackathons based on free technologies, aimed at carrying out a software development project from start to finish: working with a Raspberry Pi, creating a web application, talking about agile development methodologies, and even about good practices such as code review or continuous integration.

If you want to know more, don't hesitate to visit the Labs website, where you can sign up and see the activities we plan to run; if you like, you can also follow the videos and materials from the activities in the corresponding code repository.

We are counting on you to build a huge, active Free Software community in A Coruña :)

 

 

Attachment: labs-logo.png (27.38 KB)

by gpul on January 25, 2016 11:45 PM

January 18, 2016

GPUL Extraordinary General Assembly

An Extraordinary General Assembly of GPUL is hereby convened for Wednesday, February 3, 2016, in the Aula de Graos of the Facultade de Informática.

    First call: 20:00
    Second call: 20:30

Agenda:

    Reading and approval, if applicable, of the minutes of the previous Assembly.
    Reading of member registrations and cancellations since the last Assembly.
    Start of the voting for the Board of Directors.
    Counting of the votes.
    Appointment of the new Board of Directors.
    Discussion and approval, if applicable, of the association's intent to be
    registered as an "Asociación de Utilidad Pública" (regulated by RD1740/2003,
    as amended by RD949/2015), and of the start of the procedure to that end,
    if applicable.
    Any other business.

Should it not be possible to hold it in the Aula de Graos, an alternative room will be announced sufficiently in advance.

Signed,
Marcos Chavarría,
Secretary of GPUL.

by marcos.chavarria on January 18, 2016 02:35 PM

December 30, 2015

Call for Elections to the Board of Directors

Elections to the GPUL Board of Directors are hereby called for the following reasons:

  • At the request of the President.
  • Because twenty-four months have passed since the last call for elections to the Board of Directors.

According to the Electoral Regulations (attached), the period for submitting candidacies opens tomorrow. The electoral calendar is as follows:

  • Call date: 23/12/2015
  • Submission of candidacies: 24/12/2015 to 08/01/2016
  • Publication of the provisional list of candidacies: 11/01/2016
  • Period for appeals: 11/01/2016 to 13/01/2016
  • Publication of the final list of candidacies: 15/01/2016
  • Start of the electoral campaign: 18/01/2016
  • Electronic voting:
    • Request: 13/01/2016 to 19/01/2016
    • Reception of votes: from 20/01/2016 until 6 hours before the first call of the Extraordinary General Assembly with the vote as an agenda item.
  • Postal voting:
    • Request: 24/12/2015 to 4/1/2016
    • Sending of ballots: 15/01/2016 to 19/01/2016
    • Reception of votes: from 15/01/2016 until 6 hours before the first call of the Extraordinary General Assembly with the vote as an agenda item.
  • Call for the Extraordinary General Assembly with the vote as an agenda item: 15/01/2016
  • Holding of the Extraordinary General Assembly with the vote as an agenda item: 02/02/2016 to 09/02/2016

For electronic voting, only the FNMT digital certificate will be accepted.

The current Board of Directors encourages all members to participate in the process.

by marcos.chavarria on December 30, 2015 07:21 PM

Frogr 1.0 released

I've just released frogr 1.0. I can't believe it took me 6 years to move from the 0.x series to the 1.0 release, but here it is, finally. For better or worse.

Screenshot of frogr 1.0

This release is again a small increment on top of the previous one: it fixes a few bugs, should make the UI look a bit more consistent and "modern", and includes some cleanups at the code level that I had been wanting to do for some time, like using G_DECLARE_FINAL_TYPE, which helped me get rid of ~1.7K LoC.

Lastly, I've created a few packages for Ubuntu in my PPA that you can use right away if you're on Vivid or later, until it gets packaged by the distro itself; I'd expect it to eventually be available via the usual means in different distros, hopefully soon. For extra information, just take a look at frogr's website at live.gnome.org.

Now remember to take lots of pictures so that you can upload them with frogr 🙂

Happy new year!

by mario on December 30, 2015 04:04 AM

December 17, 2015

Improving disk I/O performance in QEMU 2.5 with the qcow2 L2 cache

QEMU 2.5 has just been released, with a lot of new features. As with the previous release, we have also created a video changelog.

[VIDEO: QEMU 2.5 video changelog: https://www.youtube.com/embed/lFgopoa9Rso]

I plan to write a few blog posts explaining some of the things I have been working on. In this one I’m going to talk about how to control the size of the qcow2 L2 cache. But first, let’s see why that cache is useful.

The qcow2 file format

qcow2 is the main format for disk images used by QEMU. One of the features of this format is that its size grows on demand, and the disk space is only allocated when it is actually needed by the virtual machine.

A qcow2 file is organized in units of constant size called clusters. The virtual disk seen by the guest is also divided into guest clusters of the same size. QEMU defaults to 64KB clusters, but a different value can be specified when creating a new image:

qemu-img create -f qcow2 -o cluster_size=128K hd.qcow2 4G

In order to map the virtual disk as seen by the guest to the qcow2 image in the host, the qcow2 image contains a set of tables organized in a two-level structure. These are called the L1 and L2 tables.

There is one single L1 table per disk image. This table is small and is always kept in memory.

There can be many L2 tables, depending on how much space has been allocated in the image. Each table is one cluster in size. In order to read or write data to the virtual disk, QEMU needs to read its corresponding L2 table to find out where that data is located. Since reading the table for each I/O operation can be expensive, QEMU keeps a cache of L2 tables in memory to speed up disk access.

The L2 cache can have a dramatic impact on performance. As an example, here’s the number of I/O operations per second that I get with random read requests in a fully populated 20GB disk image:

L2 cache size   Average IOPS
1 MB            5100
1.5 MB          7300
2 MB            12700
2.5 MB          63600

If you’re using an older version of QEMU you might have trouble getting the most out of the qcow2 cache because of this bug, so either upgrade to at least QEMU 2.3 or apply this patch.

(In addition to the L2 cache, QEMU also keeps a refcount cache. This is used for cluster allocation and internal snapshots, but I'm not covering it in this post; please refer to the qcow2 documentation if you want to know more about refcount tables.)

Understanding how to choose the right cache size

In order to choose the cache size we need to know how it relates to the amount of allocated space.

The amount of virtual disk that can be mapped by the L2 cache (in bytes) is:

disk_size = l2_cache_size * cluster_size / 8

With the default values for cluster_size (64KB) that is

disk_size = l2_cache_size * 8192

So in order to have a cache that can cover n GB of disk space with the default cluster size we need

l2_cache_size = disk_size_GB * 131072

QEMU has a default L2 cache of 1MB (1048576 bytes) so using the formulas we’ve just seen we have 1048576 / 131072 = 8 GB of virtual disk covered by that cache. This means that if the size of your virtual disk is larger than 8 GB you can speed up disk access by increasing the size of the L2 cache. Otherwise you’ll be fine with the defaults.
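As a worked example, fully covering the 20 GB image from the benchmark above needs

l2_cache_size = 20 * 131072 = 2621440 bytes (2.5 MB)

which is exactly the cache size that produced the jump to 63600 IOPS in the table.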

How to configure the cache size

Cache sizes can be configured using the -drive option on the command line, or the 'blockdev-add' QMP command.

There are three options available, and all of them take bytes:

  • l2-cache-size: maximum size of the L2 table cache
  • refcount-cache-size: maximum size of the refcount block cache
  • cache-size: maximum size of both caches combined

There are two things that need to be taken into account:

  1. Both the L2 and refcount block caches must have a size that is a multiple of the cluster size.
  2. If you only set one of the options above, QEMU will automatically adjust the others so that the L2 cache is 4 times bigger than the refcount cache.

This means that these three options are equivalent:

-drive file=hd.qcow2,l2-cache-size=2097152
-drive file=hd.qcow2,refcount-cache-size=524288
-drive file=hd.qcow2,cache-size=2621440

Although I’m not covering the refcount cache here, it’s worth noting that it’s used much less often than the L2 cache, so it’s perfectly reasonable to keep it small:

-drive file=hd.qcow2,l2-cache-size=4194304,refcount-cache-size=262144

Reducing the memory usage

The problem with a large cache size is that it obviously needs more memory. QEMU has a separate L2 cache for each qcow2 file, so if you’re using many big images you might need a considerable amount of memory if you want to have a reasonably sized cache for each one. The problem gets worse if you add backing files and snapshots to the mix.

Consider a scenario where hd0 is a fully populated disk image and hd1 is a freshly created image, the result of a snapshot operation, that uses hd0 as its backing file. Reading data from this virtual disk will fill up the L2 cache of hd0, because that's where the actual data is read from. However, hd0 itself is read-only, and if you write data to the virtual disk it will go to the active image, hd1, filling up its L2 cache as a result. At some point you'll have in-memory cache entries from hd0 that you won't need anymore because all the data from those clusters is now retrieved from hd1.

Let's now create a new live snapshot, hd2, on top of hd1:

Now we have the same problem again. If we write data to the virtual disk it will go to hd2 and its L2 cache will start to fill up. At some point a significant amount of the data from the virtual disk will be in hd2, however the L2 caches of hd0 and hd1 will be full as a result of the previous operations, even if they’re no longer needed.

Imagine now a scenario with several virtual disks and a long chain of qcow2 images for each one of them. See the problem?

I wanted to improve this a bit, so I worked on a new setting that allows the user to reduce the memory usage by removing cache entries when they are not being used.

This new setting is available in QEMU 2.5, and is called ‘cache-clean-interval‘. It defines an interval (in seconds) after which all cache entries that haven’t been accessed are removed from memory.

This example removes all unused cache entries every 15 minutes:

-drive file=hd.qcow2,cache-clean-interval=900

If unset, this parameter defaults to 0, which disables the feature.

Further information

In this post I only intended to give a brief summary of the qcow2 L2 cache and how to tune it in order to increase the I/O performance, but it is by no means an exhaustive description of the disk format.

If you want to know more about the qcow2 format, here are a few links:

Acknowledgments

My work in QEMU is sponsored by Outscale and has been made possible by Igalia and the invaluable help of the QEMU development team.

Enjoy QEMU 2.5!

by berto on December 17, 2015 03:39 PM

December 13, 2015

The kernel ate my packets

Some time ago I had a problem with a server. It had two ethernet interfaces connected to different vlans. The main network traffic went via the default gateway on the first vlan, but there was a service listening on the other interface.

Everything was fine until we tried to reach the second interface from another node outside the second vlan but close to it. There seemed to be no connection but, as I saw with tcpdump, the traffic was arriving. It was a simple test: I ran a ping from the other node (10.1.2.55) and captured traffic on the second interface (10.10.1.62):

[root@blackdog ~]# tcpdump -w /tmp/inc-eth1-ping.pcap -i eth1
tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
20 packets captured
20 packets received by filter
0 packets dropped by kernel
[root@blackdog ~]# tcpdump -nnr /tmp/inc-eth1-ping.pcap
reading from file /tmp/inc-eth1-ping.pcap, link-type EN10MB (Ethernet)
01:35:15.751507 IP 10.1.2.55 > 10.10.1.62: ICMP echo request, id 65466, seq 78, length 64
01:35:16.759271 IP 10.1.2.55 > 10.10.1.62: ICMP echo request, id 65466, seq 79, length 64
01:35:17.767223 IP 10.1.2.55 > 10.10.1.62: ICMP echo request, id 65466, seq 80, length 64
01:35:18.775153 IP 10.1.2.55 > 10.10.1.62: ICMP echo request, id 65466, seq 81, length 64

So the ping packets arrived at the server, but there was no answer via this interface. I captured traffic on the other interface, but there was no answer there either:

[root@blackdog ~]# tcpdump -nnr /tmp/inc-eth0-ping.pcap |grep 10.1.2.55
[root@blackdog ~]#

OK, here is the cause: reverse path filtering was set to strict mode:

[root@blackdog ~]# cat /proc/sys/net/ipv4/conf/all/rp_filter
1

And one solution is to relax it to loose mode:

[root@blackdog ~]# echo 2 > /proc/sys/net/ipv4/conf/all/rp_filter

Now let's look again at the incoming packets on eth1:

[root@blackdog ~]# tcpdump -nnr /tmp/inc-eth1-ping.pcap|grep 10.1.2.55
01:47:00.322056 IP 10.1.2.55 > 10.10.1.62: ICMP echo request, id 42171, seq 1, length 64
01:47:01.323834 IP 10.1.2.55 > 10.10.1.62: ICMP echo request, id 42171, seq 2, length 64
01:47:02.324601 IP 10.1.2.55 > 10.10.1.62: ICMP echo request, id 42171, seq 3, length 64
01:47:03.325823 IP 10.1.2.55 > 10.10.1.62: ICMP echo request, id 42171, seq 4, length 64

And the outgoing packets at eth0:

[root@blackdog ~]# tcpdump -nnr /tmp/inc-eth0-ping.pcap|grep 10.1.2.55
01:47:18.969567 IP 10.10.1.62 > 10.1.2.55: ICMP echo reply, id 42427, seq 1, length 64
01:47:19.970800 IP 10.10.1.62 > 10.1.2.55: ICMP echo reply, id 42427, seq 2, length 64
01:47:20.969751 IP 10.10.1.62 > 10.1.2.55: ICMP echo reply, id 42427, seq 3, length 64
01:47:21.968764 IP 10.10.1.62 > 10.1.2.55: ICMP echo reply, id 42427, seq 4, length 64
01:47:22.968705 IP 10.10.1.62 > 10.1.2.55: ICMP echo reply, id 42427, seq 5, length 64

What happened here? As this Red Hat note says, the rp_filter kernel parameter became stricter than in previous kernel versions, so the value "1" has a different meaning. For example, in the 2.6.18 kernel you can read in the documentation (/usr/share/doc/kernel-doc-2.6.18/Documentation/networking/ip-sysctl.txt):

        1 - do source validation by reversed path, as specified in RFC1812
            Recommended option for single homed hosts and stub network
            routers. Could cause troubles for complicated (not loop free)
            networks running a slow unreliable protocol (sort of RIP),
            or using static routes.

And in 2.6.32 and more recent kernels:

        1 - Strict mode as defined in RFC3704 Strict Reverse Path 
            Each incoming packet is tested against the FIB and if the interface
            is not the best reverse path the packet check will fail.
            By default failed packets are discarded.

Of course, there is another (more elegant) solution: using multiple routing tables, as sketched below.
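A minimal sketch of that approach with iproute2 could look like this, assuming a hypothetical gateway 10.10.1.1 on the second vlan (the table name and number are arbitrary):

[root@blackdog ~]# echo "100 vlan2" >> /etc/iproute2/rt_tables
[root@blackdog ~]# ip route add 10.10.1.0/24 dev eth1 src 10.10.1.62 table vlan2
[root@blackdog ~]# ip route add default via 10.10.1.1 table vlan2
[root@blackdog ~]# ip rule add from 10.10.1.62 table vlan2

With this, replies from 10.10.1.62 are routed using the vlan2 table and leave through eth1, so strict reverse path filtering can stay enabled.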

Thanks again to Rafa Serrada from HPE for giving me the clue that solved the problem :-)

on December 13, 2015 06:53 PM

November 26, 2015

Attending the Web Engines Hackfest

webkitgtk-hackfest-banner

It's certainly been a while since I last attended this event, 2 years ago, when it was a WebKitGTK+-only hackfest, so I guess it was a matter of time before it happened again…

It will be different for me this time, though, as now my main focus won’t be on accessibility (yet I’m happy to help with that, too), but on fixing a few issues related to the WebKit2GTK+ API layer that I found while working on our platform (Endless OS), mostly related to its implementation of accelerated compositing.

Besides that, I'm particularly curious to see what the hackfest looks like now that it has broadened its scope to include other web engines, and I'm also quite happy to know that I'll be visiting my home town and meeting my old colleagues and friends from Igalia for a few days, once again.

Endless Mobile logo

Lastly, I'd like to thank my employer for sponsoring this trip, as well as Igalia for organizing this event, one more time.

See you in Coruña!

by mario on November 26, 2015 11:29 AM

November 16, 2015

GPUL Labs

This year at GPUL we want to innovate a bit in our usual activity planning, so we have been mulling over a new form of organization for a while, with the idea of reclaiming the P for Programadores (Programmers) in the association's name and trying to regenerate that feeling of community within free software in the city of A Coruña.

GPUL Labs

This year GPUL's activity plan will revolve around a development project that we will start from the very beginning and take as far as the road leads us, first learning the basics of a language such as Python and the basic concepts of version control with a modern system such as Git, with the idea of advancing through the various stages every modern software project must go through.

We will talk about agile development methodologies, continuous integration systems for automatically running tests, documentation with LaTeX, creation of REST APIs, and anything else the participants propose.

Give us a hand

We set ourselves this ambitious goal at GPUL in order to recover those ties within the computing community that have been fading in recent years, and we want it to serve as a springboard to spread free software within that community; but we cannot do this alone.

WE NEED YOUR HELP!

We are looking for people to lend us a hand occasionally with organizing a talk or workshop, to help us find a speaker or, if they know the topic well, to be the speaker themselves :)

You have more information at the following link; we hope to count on you! ;)

 

Attachment: flyer_1.png (289.27 KB)

by gpul on November 16, 2015 03:39 PM

November 07, 2015

Importing include paths in Eclipse

First of all, let me be clear: no, I'm not trying to leave Emacs again; I already got over that stage. Emacs is and will be my main editor for the foreseeable future, as it's clear to me that there's no other editor I feel more comfortable with, which is why I spent some time cleaning up my .emacs.d and making it more "manageable".

But as much as I like Emacs as my main "weapon", I sometimes appreciate the advantages of using a different kind of beast for specific purposes. And, believe me or not, in the past 2 years I have learned to love Eclipse/CDT as the best work-mate I know when I need some extra help to dig deep into the two monster C++ projects that WebKit and Chromium are. And yes, I know Eclipse is resource hungry, slow, bloated… and whatnot; but I'm lucky enough to have fast SSDs and lots of RAM in my laptop & desktop machines, so that's not really a big concern for me anymore (even though I reckon that indexing Chromium on the laptop takes "quite some time"), so let's move on 🙂

However, there's one little thing that still bothers me quite a lot about Eclipse: you need to manually set up the include paths for any external dependencies of a C/C++ project that are not in a standard location, so that certain features work properly, such as code auto-completion, automatic error checking, call hierarchies… and so forth.

And yes, I know there is an Eclipse plugin adding support for pkg-config which should do the job quite well. But for some reason I can't get it to work with Eclipse Mars, even though others apparently can (and I remember using it with Eclipse Juno, so it's definitely not a myth).

Anyway, I did not feel like fighting with that (broken?) plugin and, on the other hand, I was actually quite inclined to play a bit with Python, so… my quick and dirty solution was to write a small script that takes a list of package names (as you would pass them to pkg-config) and generates the XML content that you can then import into Eclipse. And surprisingly, that worked quite well for me, so I'm sharing it here in case someone else finds it useful.
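The core of the idea is trivial. Here is a rough sketch, not the actual script: the XML element names below are simplified placeholders, whereas the real script emits the same schema that Eclipse's own export function produces:

#!/usr/bin/env python
# Sketch of the idea behind pkg-config-to-eclipse: query pkg-config for the
# include directories of each package and wrap them in XML for Eclipse.
import subprocess
import sys

def include_paths(packages):
    # Ask pkg-config for the -I flags of every package and strip the prefix.
    out = subprocess.check_output(['pkg-config', '--cflags-only-I'] + packages)
    return [flag[2:] for flag in out.decode().split() if flag.startswith('-I')]

if __name__ == '__main__':
    print('<?xml version="1.0" encoding="UTF-8"?>')
    print('<includepaths>')  # placeholder root element, see note above
    for path in include_paths(sys.argv[1:]):
        print('  <includepath>%s</includepath>' % path)
    print('</includepaths>')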

Using frogr as an example, I generate the XML file for Eclipse doing this:

  $ pkg-config-to-eclipse glib-2.0 libsoup-2.4 libexif libxml-2.0 \
        json-glib-1.0 gtk+-3.0 gstreamer-1.0 > frogr-eclipse.xml

…and then I simply import frogr-eclipse.xml from the project’s properties, inside the C/C++ General > Paths and Symbols section.

After doing that I get rid of all the brokenness caused by so many missing symbols and header files, code auto-completion works nicely again, along with all those perks you would expect from this little big IDE. And all that without having to go through the pain of defining every path one by one in the settings dialog, thank goodness!

Now you can quickly see how it works in the video below:

[VIDEO: Setting up a C/C++ project in Eclipse with pkg-config-to-eclipse: https://www.youtube.com/embed/16TJ1zopjeY]

This has been very helpful for me, hope it will be helpful to someone else too!

by mario on November 07, 2015 12:35 AM

November 05, 2015

Somebody has changed all the system permissions

I originally submitted this post to the Docker people for the celebration of the 2015 Sysadmin Day, and they selected it as one of their favorite war stories. Now I'm publishing it on my own blog.

Some time ago I was working as a Linux sysadmin at a major company. Our team was in charge of the operating system, but other teams were the application administrators, so in some circumstances we allowed them certain privileged commands via sudo. They could install or patch some services this way.

One day I received a phone call from one of our users. He told me there was a server behaving erratically. I tried to ssh into it. Connection refused. I tried to log in from the console, and could only see weird messages.

So I booted the server in rescue mode with an OS ISO and mounted the filesystems. I began to see that someone had changed all the permissions across the whole system. I investigated for a while and discovered who the culprit was, and the command they had executed: a sudo chmod -R something /

How can we recover the server in a situation like this? After some preliminary steps (fixing a few permissions by hand, chrooting into the system), we did it using the RPM database:

for p in $(rpm -qa); do rpm --setperms $p; done
for p in $(rpm -qa); do rpm --setugids $p; done

We had a SUSE server in our case, so I did an additional step:

/sbin/conf.d/SuSEconfig.permissions

And… of course, I would never have had this problem if the application had been jailed in a Docker container (and the user who ran that chmod, in the State Prison ;-))

on November 05, 2015 07:46 PM

October 19, 2015

GPUL takes part in the Conference on good practices with Free Software in NGOs

This Thursday, October 22, GPUL will be present at the 1st Conference on good practices with Free Software in NGOs and Social Action Entities, held at the Cidade da Cultura in Santiago de Compostela starting at 16:30. At this event our colleague Emilio J. Padrón González (@emiliojpg) and Ana Vázquez Fernández from the Coordinadora Galega de ONGD will give a talk entitled "Experiencia de colaboración no terceiro sector para a migración a Software Libre" (an experience of third-sector collaboration on migrating to Free Software), in which they will explain GPUL's collaboration in the migration to Free Software at the Coordinadora Galega de ONGDs.

I Xornada de boas prácticas con Software Libre nas ONGs e Entidades de Acción Social

The main outcome of GPUL's collaboration with organizations such as the Coordinadora Galega de ONGDs and Enxeñería Sen Fronteiras Galicia was the migration of both organizations' systems to Free Software, which they are now working with.

The talk will present how the migration process went, what needs must be covered in this kind of organization, and some of the main challenges that arose along the way.

It is relatively common to see fields that defend and promote the use of free and open technologies (both for the cost savings their adoption can bring in the medium and long term and, above all, for the technological independence and sovereignty they allow and the ethics behind their development model) fail to practice what they preach, using proprietary technologies in that very promotion work. This is frequent in many organizations devoted to the Third Sector, which still work day to day with non-free systems and tools.

In Galicia we have a good number of non-profit associations with broad experience in the use and study of Free Software, classically known as LUGs or GLUGs, from the English GNU/Linux User Group. In this talk we present the collaboration experience of one of the longest-running GLUGs in Galicia, GPUL, with two Third Sector organizations, Enxeñería Sen Fronteiras (ESF) and the Coordinadora Galega de ONGs para o Desenvolvemento, which it advises and helps with the management and maintenance of their IT.

Attachment: xornadas-3sector-mini.png (166.03 KB)

by gpul on October 19, 2015 11:03 AM

October 10, 2015

Running Vagrant on OpenSUSE

Some weeks ago Fedora Magazine published a post about running vagrant on Fedora 22 using the libvirt provider. But if you try to repeat the procedure on openSUSE you'll have to perform some different steps, because there is currently no vagrant package in openSUSE (I use 13.2).

So you need to run:

tsao@mylaptop :~> sudo zypper in ruby ruby-devel

tsao@mylaptop :~> sudo rpm -Uvh https://dl.bintray.com/mitchellh/vagrant/vagrant_1.7.4_x86_64.rpm

or, for the newer 1.8.1 release:

tsao@mylaptop :~> sudo rpm -Uvh https://releases.hashicorp.com/vagrant/1.8.1/vagrant_1.8.1_x86_64.rpm

The most used virtualization provider for Vagrant is VirtualBox, so at this point you can run VirtualBox boxes if you were already running VirtualBox VMs before.

But if you want to run libvirt boxes, you need to:

tsao@mylaptop :~> sudo zypper in qemu libvirt libvirt-devel qemu-kvm
tsao@mylaptop :~> vagrant plugin install vagrant-libvirt
tsao@mylaptop :~> sudo systemctl enable libvirtd
tsao@mylaptop :~> sudo systemctl start libvirtd
tsao@mylaptop :~> sudo usermod -a -G libvirt tsao

And at this point you can add and run vagrant-libvirt boxes. Enjoy :-)
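For example (the box name below is just an illustration; any box built for the libvirt provider will do):

tsao@mylaptop :~> vagrant init fedora/23-cloud-base
tsao@mylaptop :~> vagrant up --provider=libvirt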

Update, March 4th, 2016: Thanks to George J. Johnson for warning me about some typos.

on October 10, 2015 09:39 PM

October 07, 2015

GPUL Extraordinary General Assembly

Place: Aula 2.0a (2nd floor), Facultade de Informática da Coruña

Date: October 14, 2015

First call: 19:30
Second call: 20:00

Agenda:

- Reading and approval, if applicable, of the minutes of the previous Assembly.
- Reading of member registrations and cancellations since the last Assembly.
- Reading and approval, if applicable, of the 2014 accounts.
- Status of the 2015 accounts.
- Discussion and approval, if applicable, of the activities to be carried out in 2016.
- Any other business.

by gpul on October 07, 2015 09:21 PM

September 25, 2015

11th Introduction to GNU/Linux and Free Software sessions for new students

The start of the academic year continues at full speed, this time with a small introductory workshop on Free Software and GNU/Linux in which, as every year, we will teach everyone who wants the basic commands to work with the terminal in GNU/Linux, plus a short intro to what free software is and why it is so cool.

The workshop will take place next Tuesday, September 29, in lab 1.1, in two sessions:

Morning session: 12:00 - 13:30

Afternoon session: 17:00 - 18:30

Entry is completely free and no registration is needed, so we hope to see you there!! :)

Attachment: intro_linux.png (242.16 KB)

by gpul on September 25, 2015 10:12 AM

September 19, 2015

Notes about time in UNIX and Linux systems (II): NTP

In this second part of my post about time management I will write about NTP and its daemon configuration. As I mentioned in the previous post, if you need very accurate time the best option is the ntp.org implementation of the protocol. If you value security over accuracy, you can use the OpenBSD project's implementation. OpenNTPd is not a complete implementation of the protocol but, as usual with OpenBSD software, it is good, well-documented, audited code.

NTP configuration

Tip: If you run GNU/Linux on virtual infrastructure, review the kernel boot parameters.

Some years ago I had a problem with virtual machines that were unable to synchronize with the NTP servers. The problem was solved by reviewing this matrix at VMware.

Tip: Don't forget to open port 123 towards the NTP servers in your firewall.

Here is a very simple /etc/ntp.conf file:

driftfile /var/lib/ntp/drift/ntp.drift # path for drift file
logfile   /var/log/ntp          # alternate log file
server server1
server server2

After each "server" line you can add boot-time options such as iburst (RHEL6/7, SLES12) or dynamic (SLES11). These options help improve synchronization when the network is temporarily down and/or there is no name resolution.

Another interesting directive is driftfile, which helps adjust the clock frequency when ntpd boots. Remember this file must be writable by the ntp user.
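Once ntpd is running, a quick way to verify that it is actually synchronizing is ntpq -p; the server names and figures below are made up for illustration:

root@mynode:~ # ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*server1         .GPS.            1 u   32   64  377    0.512   -0.024   0.113
+server2         192.168.1.10     2 u   18   64  377    1.204    0.310   0.201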

If you are configuring a SLES node, it's easy to run yast. But maybe you want a simple automated configuration where you only touch /etc/ntp.conf. In that case you must disable the NTP configuration in /etc/sysconfig/network/config by setting the policy parameter to empty:

[...]
## Type:        string
## Default:     "auto"
#
# Defines the NTP merge policy as documented in netconfig(8) manual page.
# Set to "" to disable NTP configuration.
#
NETCONFIG_NTP_POLICY="auto"

## Type:        string
## Default:     ""
#
# List of NTP servers.
#
NETCONFIG_NTP_STATIC_SERVERS=""
[...]

As I said when talking about configuring the timezone in Exadata (RHEL5, 6?), the standard procedure is running the /opt/oracle.cellos/ipconf tool.

But if you are tempted to edit /etc/ntp.conf directly and you change the NTP servers, you must restart the cellwall service afterwards. This is the firewall daemon enabled by default on the storage cells. When cellwall boots, it scans the /etc/ntp.conf file looking for the NTP servers in order to open the ports.

How to configure the NTP daemon

Tip: If you are running databases, you must use the slewing option (-x).

The slewing option avoids abrupt time synchronizations. Large time jumps are bad for database consistency and very dangerous for some related services. For example, if you are running Oracle CRS and your clock is off by a few seconds, you must stop all CRS processes (taking the node out of the cluster is not enough) before performing a manual NTP synchronization. If you don't stop the CRS processes, the synchronization can cause an outage.

SLES

The NTP daemon configuration is at /etc/sysconfig/ntp:

## Path:           Network/NTP
## Description:    Network Time Protocol (NTP) server settings
## Type:           string
## Default:        "-g -u ntp:ntp"
#
# Additional arguments when starting ntpd. The most
# important ones would be
# -u user[:group]   to make ntpd run as a user (group) other than root.
#
NTPD_OPTIONS="-g -u ntp:ntp"

## Type:           yesno
## Default:        yes
## ServiceRestart: ntp
#
# Shall the time server ntpd run in the chroot jail /var/lib/ntp?
#
# Each time you start ntpd with the init script, /etc/ntp.conf will be
# copied to /var/lib/ntp/etc/.
#
# The pid file will be in /var/lib/ntp/var/run/ntpd.pid.
#
NTPD_RUN_CHROOTED="yes"

## Type:           string
## Default:        ""
## ServiceRestart: ntp
#
# If the time server ntpd runs in the chroot jail these files will be
# copied to /var/lib/ntp/ besides the default of /etc/{localtime,ntp.conf}
#
NTPD_CHROOT_FILES=""

[...]

## Type:           boolean
## Default:        "no"
#
# Force time synchronization befor start ntpd
#
NTPD_FORCE_SYNC_ON_STARTUP="yes"

[...]

There are more options, but I think these are the most interesting: the ntpd options (there you can include the -x slewing option), chrooting (it improves the security of the daemon), and hard synchronization before booting the daemon.

If the difference between the machine's current time and the NTP servers is larger than what the tinker panic parameter allows (1000 seconds by default), ntpd exits with an error. But adding the -g option means the daemon will synchronize on boot regardless of the jump (only once, at boot).
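If you prefer to disable that panic threshold entirely (common advice for virtual machines, where large offsets can appear after a guest is resumed), you can add this line at the top of /etc/ntp.conf:

tinker panic 0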

Be careful with NTPD_FORCE_SYNC_ON_STARTUP: your sensitive applications must start after ntp to avoid time jumps.

It can also be interesting to enable the NTPD_FORCE_SYNC_HWCLOCK_ON_STARTUP option (if you enabled the previous one), in order to keep an accurate time in the hardware clock. Remember that is the time the operating system takes on boot, before the NTP daemon starts.

As you can see, in SLES chrooting is active by default. Remember this option needs some files copied into /var/lib/ntp and /proc bind-mounted inside the jail. I sometimes use mondorescue for bare-metal recovery, and I experienced some issues when I didn't exclude the ntp jail from the backup.

After the daemon configuration, you have some options to run the daemon:

root@SLES10_or_11:~ # rcntp start
root@SLES12:~ # systemctl start ntpd
root@SLES10_11_12:~ # service ntp start 

Don’t forget to enable the daemon by default on OS boot:

root@SLES10_or_11:~ # chkconfig ntp 35 
root@SLES12:~ # systemctl enable ntpd 

RHEL

The RHEL config file /etc/sysconfig/ntpd is less documented by default than the SLES one. This is the RHEL6 file:

# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid -g"

With the -x option (or if you added servers to /etc/ntp/step-tickers), the daemon won't step the clock before starting. So in RHEL6, if you want a hard sync before ntpd starts, you must enable the ntpdate service too.

It's a good idea to add SYNC_HWCLOCK=yes to /etc/sysconfig/ntpd (or /etc/sysconfig/ntpdate if you enable the ntpdate service), as we did with the NTPD_FORCE_SYNC_HWCLOCK_ON_STARTUP option in SLES.

In RHEL7 using ntpdate this way is deprecated; instead, it acts as a time-sync.target provider, like sntp. In the documentation, Red Hat advises adding After=time-sync.target to your sensitive services in order to avoid large jumps caused by the initial synchronization with these tools.

ntpd chrooting is disabled by default in RHEL. I found a procedure for RHEL6; it's not as automatic as in SLES and requires several manual steps.

And… after the configuration, you can enable and start the daemon:

root@RHEL5_or_6:~ # chkconfig ntpd on 
root@RHEL7:~ # systemctl enable ntpd 

root@RHEL5_or_6:~ # service ntpd start 
root@RHEL7:~ # systemctl start ntpd 

HP-UX

In HP-UX 11.31 the xntpd (HP's) and the ntpd (free software) implementations coexist. xntpd is not supported after April 1, 2014.

There is a configuration file called /etc/rc.config.d/netdaemons. As you can guess, you will find the (x)ntpd daemon configuration there:

[...]
XNTPD_NAME=ntpd
export NTPDATE_SERVER=
export XNTPD=1
export XNTPD_ARGS="-x"
[...]

In order to enable the service, you can either edit the file and set XNTPD=1, or run:

root@myHPUX:/# ch_rc -a -p XNTPD=1
root@myHPUX:/# ch_rc -l -p XNTPD   # show the status of the xntp service on boot

And you start/stop the daemon in the classic way:

root@myHPUX:/# /sbin/init.d/xntpd start

AIX

In AIX the NTP daemon is enabled in /etc/rc.tcpip, along with the main OS network daemons.

[...]
# Start up Network Time Protocol (NTP) daemon
start /usr/sbin/xntpd "$src_running" "-x"
[...]

As you can see, I added the -x option there. I could also do it this way:

[root@myAIX /]# chssys -s xntpd -a "-x" # add the slewing option

[root@myAIX /]# chrctcp -S -a xntpd # -S start and -a enable the service

Start and check the xntpd status:

[root@myAIX /]# startsrc -s xntpd
[root@myAIX /]# lssrc -ls xntpd # check the service

Updated November 5th, 2015: If you upgrade from SLES11 SP3 to SLES11 SP4 and your ntpd is chrooted, you will have a problem with name resolution of the NTP servers. The cause is the update to ntpd > 4.2.7. You can fix it by copying the needed files into the jail, but SUSE provides a default /etc/ntp.conf with the options needed for backward compatibility, so nothing else is required.

on September 19, 2015 09:50 PM

September 15, 2015

GNU/Linux install party

On September 24, GPUL will collaborate with the Oficina de Software Libre of CIXUG to organize a GNU/Linux installation workshop for students of the Facultade de Informática of the Universidade da Coruña.

The goal of the workshop is the installation and configuration of the Ubuntu 12.04 operating system, which is available in the FIC's own lab classrooms.

In addition, the most important features of Free Software and of the installed operating system will be presented, and any questions attendees may have on the subject will be answered.

O evento dará comezo ás 16:30 horas na aula 0.5w.

O acceso ao taller realizarase previa inscrición ata completar o aforo da aula (25 persoas):

http://osl.cixug.es/taller-de-instalacion-de-gnulinux-na-facultade-de-informatica-da-udc/

Correde apuntarvos!

AdjuntoTamaño
cartel_impresion.png238.18 KB

por gpul el September 15, 2015 01:53 PM

August 25, 2015

Amateur radio exam

Sometimes I think I would like to write an "amateur radio manual" in Spanish, because I didn't find much material of that kind when I wanted to prepare for the Spanish amateur radio license exam, and I believe that at least half of the fun inherent in learning something is in teaching it to other people. Still, that would be a huge amount of work and it would take me a long time to complete. Since for now I don't have the time for it, I have decided to prepare a sample mini-exam with the kind of questions you can find in the real one. I hope you find it useful.

Amateur radio exam

1. In the 10 meter band, with single sideband modulation, what is the maximum allowed bandwidth?
  A. 10 meters.
  B. It depends on whether the band is municipal or military.
  C. All the bandwidth available between "Valencia" and "Islas Canarias".

2. For which of the following functions can you not use a transistor?
  A. Switch.
  B. Mixer.
  C. Listening to football and bullfights.

3. For a transformer with 50 turns in the primary and 200 in the secondary, what is the ratio between the input impedance and the output impedance?
  A. The question is unreasonable.
  B. More than zero and less than infinity.
  C. Why do we insist on demanding a ratio, instead of letting the transformer freely do whatever it wants?

4. What is the allowed limit for unwanted emissions?
  A. It depends on the frequency. For instance, every single day would be too much.
  B. 35 decibels by day and 30 by night, measured with the windows closed.
  C. Judging by the crap they put on TV every day, higher than I thought.

5. Two people located 2500 km apart want to communicate at noon at the crest of the solar cycle. Which band should they use?
  A. The 2500 km band.
  B. The bagpipe band of the Ourense provincial council.
  C. Internet broadband.

6. Which Spanish district comprises the provinces of Barcelona, Girona, Lleida and Tarragona?
  A. District 3.
  B. Number 1! Always number 1!
  C. Ask me next year and the answer might surprise you.

7. What is the radiation pattern of a 4-element Yagi antenna mounted horizontally 15 meters above ground level and parallel to it?
  A. Radiation, you say?
  B. Mother of mine, did you seriously say radiation?
  C. High gain towards the front with side nulls and a ratio of... wait, did you really say radiation?

8. Which of the following is a good practice to employ with repeaters?
  A. Telling them they are failures for repeating a school year.
  B. I couldn't come up with other funny options to put here.

9. What is the critical frequency?
  A. A frequency that is incapable of doing anything by itself but still has opinions about what everyone else does.
  B. The frequency below which one doesn't bathe often enough.
  C. Probably one of those radio talk shows.

10. What is Morse code made of?
  A. Dots and dashes.
  B. Bells and whistles.
  C. M, O, R, S and E.

by jacobo on August 25, 2015 06:00 AM

August 14, 2015

I/O limits for disk groups in QEMU 2.4

QEMU 2.4.0 has just been released, and among many other things it comes with some of the stuff I have been working on lately. In this blog post I am going to talk about disk I/O limits and the new feature to group several disks together.

Disk I/O limits

Disk I/O limits allow us to control the amount of I/O that a guest can perform. This is useful for example if we have several VMs in the same host and we want to reduce the impact they have on each other if the disk usage is very high.

The I/O limits can be set using the QMP command block_set_io_throttle, or on the command line using the throttling.* options of the -drive parameter (in brackets in the list below). Both the throughput and the number of I/O operations can be limited. For more fine-grained control, each of them can be limited on read operations, write operations, or the combination of both:

  bps [throttling.bps-total]: total throughput, in bytes per second
  bps_rd [throttling.bps-read]: read throughput, in bytes per second
  bps_wr [throttling.bps-write]: write throughput, in bytes per second
  iops [throttling.iops-total]: total I/O operations per second
  iops_rd [throttling.iops-read]: read I/O operations per second
  iops_wr [throttling.iops-write]: write I/O operations per second

Example:

-drive if=virtio,file=hd1.qcow2,throttling.bps-write=52428800,throttling.iops-total=6000

In addition to that, it is also possible to configure the maximum burst size, which defines a pool of I/O that the guest can perform without being limited:

  bps_max [throttling.bps-total-max]
  bps_rd_max [throttling.bps-read-max]
  bps_wr_max [throttling.bps-write-max]
  iops_max [throttling.iops-total-max]
  iops_rd_max [throttling.iops-read-max]
  iops_wr_max [throttling.iops-write-max]

One additional parameter named iops_size allows us to deal with the case where big I/O operations can be used to bypass the limits we have set. In this case, if a particular I/O operation is bigger than iops_size then it is counted several times when it comes to calculating the I/O limits. So a 128KB request will be counted as 4 requests if iops_size is 32KB.
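
On the command line this limit is set with throttling.iops-size; a minimal sketch reusing the drive from the example above:

-drive if=virtio,file=hd1.qcow2,throttling.iops-total=6000,throttling.iops-size=32768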

Group throttling

All of these parameters I’ve just described operate on individual disk drives and have been available for a while. Since QEMU 2.4 however, it is also possible to have several drives share the same limits. This is configured using the new group parameter.

The way it works is that each disk with I/O limits is a member of a throttle group, and the limits apply to the combined I/O of all group members using a round-robin algorithm. To put several disks together, just use the group parameter on all of them with the same group name. Once the group is set, there's no need to pass the parameter to block_set_io_throttle anymore unless we want to move the drive to a different group. Since the I/O limits apply to all group members, it is enough to use block_set_io_throttle on just one of them.
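
For reference, this is roughly what the QMP command looks like when updating the limits of a whole group through one of its members (the device name virtio0 is an assumption; the six basic parameters are always required):

{ "execute": "block_set_io_throttle",
  "arguments": { "device": "virtio0",
                 "bps": 0, "bps_rd": 0, "bps_wr": 0,
                 "iops": 6000, "iops_rd": 0, "iops_wr": 0,
                 "group": "foo" } }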

Here’s an example of how to set groups using the command line:

-drive if=virtio,file=hd1.qcow2,throttling.iops-total=6000,throttling.group=foo
-drive if=virtio,file=hd2.qcow2,throttling.iops-total=6000,throttling.group=foo
-drive if=virtio,file=hd3.qcow2,throttling.iops-total=3000,throttling.group=bar
-drive if=virtio,file=hd4.qcow2,throttling.iops-total=6000,throttling.group=foo
-drive if=virtio,file=hd5.qcow2,throttling.iops-total=3000,throttling.group=bar
-drive if=virtio,file=hd6.qcow2,throttling.iops-total=5000

In this example, hd1, hd2 and hd4 are all members of a group named foo with a combined IOPS limit of 6000, and hd3 and hd5 are members of bar. hd6 is left alone (technically it is part of a 1-member group).

Next steps

I am currently working on providing more I/O statistics for disk drives, including latencies and average queue depth on a user-defined interval. The code is almost ready. Next week I will be in Seattle for the KVM Forum where I will hopefully be able to finish the remaining bits.

I will also attend LinuxCon North America. Igalia is sponsoring the event and we have a booth there. Come if you want to talk to us or see our latest demos with WebKit for Wayland.

See you in Seattle!

by berto on August 14, 2015 10:22 AM

August 11, 2015

Notes about time in UNIX and Linux systems (I): time zones

I decided to write about time handling in UNIX/Linux systems because of a message like this:

[29071122.262612] Clock: inserting leap second 23:59:60 UTC

I have similar logs in my servers from last June 30, 2015. Of course, I was aware of it some months in advance and had to do some work to be ready (kernel/ntpd upgrades depending on the version of the package; we work with 8 main releases of 3 GNU/Linux distributions). In previous leap seconds, some issues affected Linux servers all around the world. As I was praying for it, nothing happened after the leap second insertion, and I slept deeply. But it was one of those rare situations in which having different tiers (development/integration/staging/production) means nothing, because you test all the environments at the same time.

Ok, let's go. We need accurate time on a server. It's important, especially on database servers. So we will use the NTP daemon. In RHEL7 we could use chronyd, but the recommendation for servers with a stable time source is to use ntpd.

But, of course, if we didn't do it before (maybe during the OS installation), first we will need to adjust the time zone.

GNU/Linux

In previous RHEL/SLES major releases we must edit /etc/sysconfig/clock:

TIMEZONE="Europe/Madrid"
UTC=true
The meaning of the first option is clear (and we see our possibilities at /usr/share/zoneinfo). The second option points out the hardware clock has UTC configuration.

But this configuration requires rebooting the node, and sometimes it’s not possible. So, in order to take effect imediately, we run this command:

root@tardis:~ # ln -sf /usr/share/zoneinfo/Europe/Madrid /etc/localtime

In SLES of course you can use YaST for this task too.
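
And since UTC=true only tells the OS how to interpret the hardware clock, if you also want to write the (already correct) system time back to the hardware clock in UTC, a quick sketch:

root@tardis:~ # hwclock --systohc --utc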

If we are working on RHEL7/SLES12, we need to deal with Skyne^W^W systemd. And it's easy, we only need to run:

root@tardis:~ # timedatectl set-timezone Europe/Madrid
root@tardis:~ # timedatectl
      Local time: mar 2015-08-11 16:29:00 CEST
  Universal time: mar 2015-08-11 14:29:00 UTC
        RTC time: mar 2015-08-11 14:29:00
       Time zone: Europe/Madrid (CEST, +0200)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: yes
 Last DST change: DST began at
                  dom 2015-03-29 01:59:59 CET
                  dom 2015-03-29 03:00:00 CEST
 Next DST change: DST ends (the clock jumps one hour backwards) at
                  dom 2015-10-25 02:59:59 CEST
                  dom 2015-10-25 02:00:00 CET

In these versions, we can list the available time zones:

root@tardis:~ # timedatectl list-timezones
Africa/Abidjan
Africa/Accra
Africa/Addis_Ababa
Africa/Algiers
Africa/Asmara
Africa/Bamako
...

There is a peculiar case with the Oracle Exadata product. The first time you face this hardware+software stack, you may be tempted to manage it like just another RHEL server. But if you read the Exadata documentation (you have the PDFs in /usr/share/doc…) you'll see there are additional consistency layers (so be careful installing packages from other distributions ;-)).

For example, at the storage cells you stop the services and run the /opt/oracle.cellos/ipconf utility. When you finish, the changes are reflected in the usual config files and in /opt/oracle.cellos/cell.conf:

...
  <Ntp_drift>/var/lib/ntp/drift</Ntp_drift>
  <Ntp_servers>ntpserver1</Ntp_servers>
  <Ntp_servers>ntpserver2</Ntp_servers>
  <System_active>non-ovs</System_active>
  <Timezone>Europe/Madrid</Timezone>
  <Version>12.1.2.1.0</Version>
...
</Cell>

(It's an 11.2 cell XML configuration file; the newer releases have a different syntax.)

At the compute nodes the configuration change must be done in /etc/sysconfig/clock, but you must stop and disable the CRS before the change.

Here is the full configuration guide for Exadata 12c RC1 components.

HP-UX

In HP-UX 11.11, 11.23 and 11.31 the timezone configuration resides in the /etc/TIMEZONE script:

TZ=MET-1METDST
export TZ

You can edit the file or run:

set_parms timezone

There are two kernel-level parameters, timezone and dst, that you can touch for legacy applications; they are no longer used otherwise.

AIX

In AIX the standard method for configuration is smit. So you can run smitty and go to the System Environments menu. The changes are reflected in the file /etc/environment.
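
For example, after the change you should find the TZ variable there (the value below is just the one used in the HP-UX section):

TZ=MET-1METDST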

You will notice that in AIX 5.3 the configuration is a bit more complex: in this version you must configure the DST, etc. There is a guide on IBM's website.

In the next chapter I will cover ntpd administration.

on August 11, 2015 02:20 PM

July 07, 2015

Old habits, new times

Today I begin a new blog.

This will be my third project. More than 12 years ago, three friends began linuxbeat.net. Juanjo, Cañete and I wrote about technology, the University where we were studying, politics… That was the age when people socialised at the blog level: you could trace a social network by following the links from one blog to other blogs.

Most of them were written on free services like Blogspot, Photoblog… People left behind the uncomfortable, ugly, poorly updated static pages of the 90's, and new hobbyists and experts in different areas (but with no idea about web development) began to write and enrich the World Wide Web.

But as we were technology fanboys (we were active members of GPUL, the Coruña Linux users group), we rented a Spanish hosting, and we installed and configured our Wordpress via ssh.

In 2005 I launched my own weblog. My domain was enelparaiso.org; there I built my personal quasi-static page (it was generated by a Wikka Wiki engine) and a blog (Wordpress again). I spent some weeks until I found a cheap hosting in Canada that allowed ssh administration.

At umask 077 The Flight of an Albatross I wrote 393 posts on Tech, Civil Engineering, Philosophy, Politics, Solidarity, Religion, Jazz, Poetry… My writing rhythm was high in the first years, but as my jobs got more and more demanding, I progressively abandoned the blog. It happened at the same time I began to use the modern social networks: Facebook, G+ (do you remember Orkut?), Identi.ca, Twitter, Diaspora…

I use social networks today like I used the blog before: I vent my thoughts, and I keep in touch with friends and family. So, why begin a new blog?

A blog is a perfect opportunity to procrastinate put my thoughts in order. Nowadays I have a very demanding job. Sometimes I have to delay investigating or improving methods and procedures because of my daily workload. So I'll try to force myself to stop at least once a week to write about my job, sharing my experiences.

And as it's a new age, the infrastructure underneath will be different. This new web is hosted on an AWS EC2 instance. And, as in the age of cloud computing we have to improve performance, I will go back to using static pages. Of course, you will see they are a bit prettier than the pure HTML pages we wrote in the 90's. Now I use hugo, a static site generator written in Go, with the hugo-uno theme, written by Fredrik Loch.

I hope it will be useful to you :-)

on July 07, 2015 08:41 PM

July 03, 2015

On Linux32 chrooted environments

I have a chrooted environment in my 64bit Fedora 22 machine that I use every now and then to work on a debian-like 32bit system where I might want to do all sorts of things, such as building software for the target system or creating debian packages. More specifically, today I was trying to build WebKitGTK+ 2.8.3 in there and something weird was happening:

The following CMake snippet was not properly recognizing my 32bit chroot:

string(TOLOWER ${CMAKE_HOST_SYSTEM_PROCESSOR} LOWERCASE_CMAKE_HOST_SYSTEM_PROCESSOR)
if (CMAKE_COMPILER_IS_GNUCXX AND "${LOWERCASE_CMAKE_HOST_SYSTEM_PROCESSOR}" MATCHES "(i[3-6]86|x86)$")
    ADD_TARGET_PROPERTIES(WebCore COMPILE_FLAGS "-fno-tree-sra")
endif ()

After some investigation, I found out that CMAKE_HOST_SYSTEM_PROCESSOR relies on the output of uname to determine the type of the CPU, and this is what I was getting when I ran it myself:

(debian32-chroot)mario:~ $ uname -a
Linux moucho 4.0.6-300.fc22.x86_64 #1 SMP Tue Jun 23 13:58:53 UTC 2015
x86_64 x86_64 x86_64 GNU/Linux

Let’s avoid nasty comments about the stupid name of my machine (I’m sure everyone else uses clever names instead), and see what was there: x86_64.

That looked wrong to me, so I googled a bit to see what others did about this and, besides finding all sorts of crazy hacks around, I found that in my case the solution was pretty simple just because I am using schroot, a great tool that makes life easier when working with chrooted environments.

Because of that, all I had to do was specify personality=linux32 in the configuration file for my chrooted environment, and that's it. Just by doing that and re-entering the "jail", the output is much saner now:

(debian32-chroot)mario:~ $ uname -a
Linux moucho 4.0.6-300.fc22.x86_64 #1 SMP Tue Jun 23 13:58:53 UTC 2015
i686 i686 i686 GNU/Linux

And of course, WebKitGTK+ now recognized and used the right CPU type in the snippet above, and I could "relax" while watching WebKit build again.

Now, for extra reference, this is the content of my schroot configuration file:

$ cat /etc/schroot/chroot.d/00debian32-chroot
[debian32-chroot]
description=Debian-like chroot (32 bit) 
type=directory
directory=/schroot/debian32/
users=mario
groups=mario
root-users=mario
personality=linux32
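
With that configuration in place, entering the jail and double-checking the personality is as simple as this (a quick sketch):

$ schroot -c debian32-chroot
(debian32-chroot)mario:~ $ uname -m
i686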

That is all; I hope somebody else finds this useful. It certainly saved my day!

by mario on July 03, 2015 01:31 PM

June 05, 2015

The beginnings are hard... and fun

Many of you already know that GPUL was founded in 1998. Like the geek^M^M members of so many other LUGs (Linux Users Groups) born in that era, we lived through a very special time, in which the Internet was slowly arriving at our homes with laughable bandwidth. Back then we saw each other face to face much more often, and the mailing lists had a wild level of activity. There were technical discussions, mutual help, new ideas, more or less successful experiments... Much of that activity took place in office 0.05, on floor -1 of the Facultade de Informática of the UDC.

By the end of 2006 things had changed a lot for most of us. Some people starting to work, others still trying to finish the degree, new generations coming in... and someone in the high spheres decided it was about time we left the office to a professor and went to pester somewhere else :-D That other place is the office you know today, which, despite being under the emergency staircase and next to the men's toilets :-P, has the advantage of fitting quite a few more things, something that has been very handy for organizing much bigger and more professional events, like GUADEC 2012 or the upcoming Akademy 2015.

The thing is that, before packing up our stuff and abandoning our life-long office, I had the idea of recording right there, through the voices of its protagonists, what those early times had been like. There are a few dozen hours of home-made recordings, with the background noise of the server fans and variable lighting... made in a hurry, at the end of that 2006. My original idea was to make a single documentary film; I intended to have it ready for the tenth anniversary of the association, but due to circumstances the thing got postponed *a lot*.

Some time ago we were lucky enough to meet Brân González Patiño, whom we kidnapp^M^M turned into our audiovisual expert, given his training and professional work. Brân did not only contribute the technique: he had the idea of turning those dusty recordings into a documentary series, he gave it the right rhythm and the necessary distance, so that it wouldn't remain a mere piece of nostalgia for us. We aim to tell a local story that is at the same time universal, so that people who don't know us but lived similar experiences can quickly feel identified. Hence its title: «GPUL: historia de un LUG cualquiera» ("GPUL: the story of any old LUG").

We will keep updating this post with the episodes as they get published. Enjoy :-)

Chapter 1: The birth of GPUL

https://archive.org/embed/historia_capitulo1

Chapter 2: Office 0.05

https://archive.org/embed/gpul_historia_capitulo2

Chapter 3: Experimenting with Linux

https://archive.org/embed/gpul_historia_capitulo3

Chapter 4: Change of cycle

https://archive.org/embed/gpul_historia_capitul4

Recap episode

https://archive.org/embed/Gpul_historia_resumen

Episode 5: New times

https://archive.org/embed/gpul_historia_capitulo5

by tsao on June 05, 2015 03:21 PM

May 27, 2015

Looking (way) back...

We already have the trailer for the documentary series about the history of GPUL. And in a few days... the first episode :-)

https://archive.org/embed/trailer_historiadeunlugcualquiera

by tsao on May 27, 2015 09:37 PM

May 25, 2015

The word is “restomod”

“Restomod”. It's a word I didn't know until recently, and it has a lot to do with the cars I post around here.

What is a restomod

“Restomod” is a blend of restored and modified (or modernized, depending on where you ask). There is no precise definition, but the consensus goes something like this: a restomod is a classic car that, while keeping an appearance very close to the original, has been modified with modern components to make it safer and/or improve its performance.

According to the book How to build Ford Restomod street machines, the term was coined in 1995 by Jim Smart, of the magazine Mustangs & Ford Machines. He was helped by Ron Bramlett, owner of the Mustangs Plus shop, who registered the word in 2001.

During the 80s, the fashion among Mustang owners was to keep them exactly as they had left the factory. Hunting for original components was their favourite hobby. But in the 90s “heretics” began to appear who added further modifications to their cars. At first these were only modifications that could easily be undone, in case the car had to be returned to its original state. The products of these changes were called restomods: cars that were restored but also modified. Little by little the modifications became more radical: new engines, new wheels, new seats and interiors, new sound systems… The cars were still Mustangs, but they could no longer be considered classics. Although they had been restored, they had also been modified beyond what Ford's engineers had planned.

The example that made the term known to the general public is one that has already appeared around here: Eleanor, the Mustang from 60 Seconds.

Which, in fact, is not a restomod. That car is a modern model built expressly for the film. The movie “Gone in 60 Seconds”, starring Nicolas Cage, is a remake of a 1974 film of the same name, in which the original “Eleanor” appears:

The 1974 “Eleanor” (source: ford-life.com)

How much mod does a restomod need?

The subject of most discussions about restomods on the Internet is how much you need to modify a classic car before you can call it a restomod. Is changing the wheels enough? Do you have to swap the engine? Does a kit that only changes something in the car's appearance, like the mirrors or the radiator grille, count?

Jay Leno, owner of one of the largest classic car collections, said in an article for Popular Mechanics:

Take my two 1925 Doble steam cars. They weigh 6000 pounds and move pretty well but only have rear brakes. That’s insane. I put brake drums on the front, with Corvette disc brakes hidden inside them. Now I can comfortably drive my Dobles, because they reliably stop.

This is a curious case of restomodding, because the car Leno is talking about is a 1925 car… a steam car! This one, specifically:

Jay Leno in his 1925 Doble Model E Roadster (source: www.classiccarsblog.com)

However, the car keeps its original look, except that the front disc brakes are visible. It's a textbook restomod.

But Jay Leno also has more conventional cars to which he applies other modifications:

I went much further with my just-restored Ford Galaxie. While it looks completely original, it’s an all-new car underneath. The suspension now moves with improved trailing arms, a Panhard rod to limit rear-axle sway, oversize antiroll bars, beefed-up mounting brackets and stiffer, polyurethane bushings, all from a suspension company called Hotchkis. The sloppy recirculating-ball steering was replaced with a precise rack-and-pinion setup. Wilwood cross-drilled and vented disc brakes grace all four corners. In the engine room, there’s a fuel-injected 511-cubic-inch Jack Roush V8 backed by a Tremec six-speed gearbox. We wrapped the old pieces in paper and put them on a shelf in case we ever want to return the car to its original condition.

As you can read in the last line, Leno keeps the option of reverting the car to its original state if it were ever needed. And in fact, his Ford Galaxie doesn't look like a restomod until you lift the hood, as you can see in the video below.

https://www.youtube.com/embed/V-BL7G5m98M

Restomods with more mod than resto

There are other cases where this doesn't hold, because the modifications are too extensive. For those kinds of modifications there are other names: “Pro-Touring”, “Pro-Street”, “Rat Rod”, “Hot Rod”…

But I'm saving those for another day. For today, just remember: the word is “restomod”.


by xouba on May 25, 2015 08:05 PM

May 23, 2015

New album of Diffie-Hellman! xD Includes the hit LogJam !!!



New album of Diffie-Hellman! xD

Includes the hit LogJam !!!

by amhairghin on May 23, 2015 07:27 PM

May 19, 2015

Google I/O

Our colleagues from GDG Coruña are putting together something big for Google's great annual event, Google I/O, and at GPUL we also want to collaborate, together with the Árticos association!

On May 28 and 29 we will stream several talks from the event, plus debates, workshops and working groups, to celebrate Google's developer event of the year.

You have more information on the GDG page; the event will take place on the 28th from 17:00 to 23:00, and on the 29th in the morning, schedule still to be defined.

On the first day we will watch the keynote live, we will run a hackathon to develop mobile applications for the escornabot (www.escornabot.com), and we will watch other talks about Google TV, Android Wear, etc.

On the second day we will pick some talks and comment on them, along with other workshops and events.

If you are interested in attending, it is important, although not mandatory, that you sign up for the event, so we can estimate attendance and plan the event better.

Attachment: cartel_impresion.png (185.12 KB)
Attachment: liberar o teu traballo.png (245.19 KB)

by gpul on May 19, 2015 06:20 PM

May 17, 2015

The Galician's dilemma

Today's the day that commemorates Galician literature, the "Día das Letras Galegas", so it's obviously time to write about more Galician weird stuff. This is something you'll encounter if you share a meal with Galicians.

Let's first set the scene: you are having lunch, or perhaps dinner, in Galicia, with Galicians. As Galicians are wont to do, multiple serving trays are brought to the table, and everybody takes from them whatever they'd like to eat. After a couple of hours, the table is full of serving trays, all of which have one morsel left. Around the table, many Galicians talk and joke, trying to appear nonchalant while they eye the left-over portions of food greedily, obviously wanting to eat them. Yet they never touch them.

This situation is called "a vergoña do galego", which can be translated literally as "the Galician's shame", but I think a better translation would be "the Galician's dilemma". It goes like this:

Initially, the serving trays are full of food, and they circulate around the table so everyone can take a portion commensurate with how hungry they are, how much they like that particular food, and how many other trays full of food they expect to see during the meal. At the end of the first round, anyone who wants seconds can just call for a tray and serve themselves. However, as the amount of food in each tray diminishes, a secondary consideration starts to take hold: "what if someone else wants this food too?" So, when they go for seconds, or thirds, people will usually serve themselves less food than they'd actually like, so that there's still enough for someone else who may want it.

This situation reaches its logical conclusion when there's only one portion left in the serving tray. At this point, the desire to eat the food is less powerful than the dread of depriving someone else of that morsel. As a result, multiple trays will be on the table, each one displaying a single morsel of food that somebody wants to eat and nobody dares to touch. This situation often reaches ridiculous levels, where you could have trenchers with only one solitary slice of octopus, or dishes displaying one piece of raxo and one potato chip.

Galicians recognize and acknowledge this phenomenon, so they've developed some coping strategies. For example, at a restaurant, when a waiter needs to remove the serving trays, they'll just choose one of the diners and have a conversation like this:

"How did you like the octopus?"
"Ah, it was wonderful."
"So you won't mind finishing it up for me, I need to take the trencher away." (Removes the last slice of octopus from the trencher, puts it onto the Galician's plate, takes the trencher away.)

At this point the dilemma is solved, because it was the waiter, not you, who put the food on your plate. What can you do about it? Nothing, of course. May as well just eat it.

Another solution to the dilemma involves the presence of someone who is not Galician. Non-Galicians are exempt from the dilemma, and not only are they allowed to take the last morsel without fear of repercussion, they will actually be encouraged to.

"Ah, only one portion of empanada left!"
"Yes, this is the Galician's dilemma." (Explanation of the dilemma follows.)
"But you are not Galician, so it doesn't affect you, so just take it!"

Savvy non-Galicians may even just go ahead unprompted and cut the Gordian knot of the Galician dilemma:

"Is this the last prawn?"
"Yes, it is."
"Oh well, I'm not Galician, so..." (Takes it.)

Galicians being cognizant of the dilemma, they won't resent the person taking the last portion, and may even thank them for it.

When there are no non-Galicians around, the situation can require a bit of negotiation and diplomacy:

"So, why is there a Padrón pepper left?"
"The Galician's dilemma!"
"I know, but it needs to go."
"You can take it if you want it."
"Don't be absurd! It's clearly saying your name."

Etc., etc.

by jacobo on May 17, 2015 07:35 PM

The return of software patents in Spain

In 2005, from the Grupo de Programadores y Usuarios de Linux we mobilized against the "Directive of the European Union on the patentability of computer-implemented inventions"[0]. We joined other groups of free software users and developers across Europe in this protest, together with professionals, teachers and students of ICT-related professions.

In our case, it took hard work. GPUL being an eminently student association, we had to make the students of the Facultade de Informática of the UDC understand the problems that the approval of a directive of this calibre would bring, and get them involved. On the other hand, arguing with an important sector of the teaching and research staff was complicated. Certain sectors of the Faculty were clearly in favour of the directive, as they also were of proprietary software models; however, we were also surprised by the more subtle and hypocritical rejection of some people who, with one hand, grabbed and embraced free software and, with the other, were patenting everything they could, trying to keep for themselves technology paid for with the public funds we all contribute to. It is true that one of the vices of university research in Spain is the "prizes" the state hands out, in the form of funding and promotions, when a research group registers patents in its activities (just as it rewards articles in English in journals of supposedly high impact, something that has generated an interesting "market of influences").

In the end, our protest had an echo, even in the local mainstream media. We got the Faculty Board to approve a statement rejecting the European directive. And, despite the pressure of the lobbies, the directive did not go forward: 648 votes against, 14 in favour and 18 abstentions in the European Parliament.

As was to be expected, the pressure groups that pushed that initiative did not stand still. Years later, they are attacking again, this time at the national level [1][2]. As the veteran pro-free-software activist, Lieutenant Colonel Fernando Acero Martín, warns, the new Spanish patents bill [3], by subtly modifying the current 1985 law, opens a back door to software patentability in our country [4][5].

Ten years have not made us idiots. Those of us who have been in the free software movement for a long time are still aware of the seriousness of this issue. And, even though the technological yokels we have as legislators and rulers try to sell us otherwise, we know what the approval of any route to software patentability means. Let us remember that today most Internet servers, mobile devices and many everyday "smart gadgets" work thanks to the efforts of thousands of developers who, individually, in small collectives of hackers, in SMEs and in the big companies of the sector, have developed the operating systems and applications they run. This work would be extremely complicated or even impossible if they had to endure the legal obstacles, and assume the legal and administrative costs, of being continuously on the defensive in what would become a never-ending war, in which the winners are the big companies and corporations that use software patents, in the countries that allow them, not as a basis for technological and business development, but as a weapon of mass destruction fired by their expensive legal departments against their competition, big or small.

In short, this is, once again, a time to fight. A fight not only for the free software movement: a fight in defence of the technological future of our country. So that we are not sent back to the Middle Ages, as some seem to intend.

[0]http://es.wikipedia.org/wiki/Directiva_de_la_Uni%C3%B3n_Europea_sobre_pa...
[1]http://www.rtve.es/noticias/20150309/nueva-ley-patentes-preve-proteger-s...
[2]http://www.eldiario.es/turing/software_libre/patentes-acechan-software-v...
[3]https://intranet.congreso.es/portal/page/portal/Congreso/PopUpCGI?CMD=VE...
[4]http://fernando-acero.livejournal.com/98919.html
[5]http://www.eldiario.es/turing/software_libre/PSOE-UPyD-unicos-patentar-s...

by tsao on May 17, 2015 03:27 PM

May 15, 2015

I got myself an address stamp

I got myself an address stamp.

My address, stamped once in a piece of paper.

It is fun to use.

My address, stamped several times in a piece of paper.

It is indeed quite fun to use.

Several pieces of paper covered in stampings of my address.

I think I may need to buy a new ink pad soon.

by jacobo on May 15, 2015 03:57 PM

April 30, 2015

Some rain-related Galician sayings

Some time ago I wrote a post about some popular sayings in the English language. Today it's time to talk about a couple of funny sayings in the Galician language.

As you may know, I'm from Spain, but when I tell people I always specify that I'm from the part of Spain where it's rarely sunny and people aren't particularly fond of flamenco. Then people often say "oh, Basque?" and I explain that the Basque Country is in the North, while I'm from Galicia, in the North-West. In Galicia we have our own language, fittingly called "Galician", which is related to Portuguese (they were one and the same language until the 14th century, though there are many people who claim they still are.)

Galicia is notorious in Spain because it's way rainier than the rest of the country. Its capital is Santiago de Compostela, my hometown, which is notorious in Galicia because it's way rainier than the rest of the region. So I assume it wouldn't surprise you if rain featured heavily in our popular sayings. This post, in fact, is about three of those sayings.

The first one is a proverb: "nunca choveu que non escampara", which means "it's never rained for so long that it didn't eventually stop". For my region, that's quite an uncharacteristically optimistic saying that means that bad things don't last forever, so there's no need to despair. Or perhaps it's just that it rains so relentlessly that people need to be reminded that it will stop.

The second one is something you say to someone who's acting foolish or making little sense. "A ti chóveche" literally means "it's raining on/in you". You can say it of a third person too: "a ese home chóvelle" ("it's raining in that man"). I'm guessing it's short for "a ti chóveche na cabeza" ("it's raining inside your head"), which to me is quite evocative. It's basically saying that this person's head is so empty there's enough room for water to evaporate, gather into clouds, condense and precipitate in the form of free-falling drops of water. That's quite a lot of emptiness.

The third and final one for today is "xa choveu", which means "it has rained [quite a bit since then]". You say it to express that quite a long time has elapsed since something. For example, you show someone a photo of your childhood, and this conversation ensues:

"Mira que delgado estaba nesta foto." ("Look how thin I was in this photo.")
"Xa choveu." ("It's been quite a while since.")
"Vai tomar polo cu." ("I resent that remark.")

The last sentence is not translated literally, because I've often observed that English speakers have a lower tolerance for profanity than Galician speakers :-)

For now, that's it for rain-related Galician language sayings. I should probably write a post about Galician language profanity, since we have quite a bit of it, and it's quite creative even for rest-of-Spain standards :-)

(Post your comments in the accompanying Google+ post.)

by jacobo on April 30, 2015 03:49 PM

April 01, 2015

Xoves Libres: Collaborative business models

Collaborative business models. Why does Free Software dominate the market?

At GPUL we are always trying to show that this Free Software thing brings heaps of value and advantages over the proprietary alternatives, but for a while we have wanted to take advantage of the current boom of ICT entrepreneurship in Galicia to show those who are starting a software-related business project that there are alternatives to the traditional business models, alternatives that respect the freedoms of the potential customer and can even speed up the development and expansion of a business initiative based on free software.

That is why next Thursday, April 9, at 17:00, in the now legendary laboratory 0.1w of the Facultade de Informática, we will host a true expert on free software business models: Roberto Brenlla.

Roberto studied economics at the Universidade de Santiago de Compostela and was one of the founders of the Galician company Tegnix, dedicated to free-software-based consultancy and to systems management in business environments, also working on projects for the public administration, such as the Abalar project of the Xunta de Galicia, with more than 75,000 machines managed remotely.

He chaired the Asociación de Empresas Galegas de Software Libre (AGASOL) for several years and has devoted his whole life to free software, becoming an expert on the subject; he currently works as a freelance expert in Free Software and Open Source strategies.

Attachment: cartel_modelos_negocio.png (201.66 KB)

by gpul on April 01, 2015 09:50 PM

March 30, 2015

Xoves Libres: Galipedia wikimarathon at the UDC

At GPUL we already have a battery of events lined up to keep all your Thursdays busy next month ;)

For now we can tell you that we will collaborate with the Facultade de Informática in organizing a Galipedia wikimarathon that will last two days and will be part of the Xoves Libres series. You have more information in the following press release; go sign up before the places run out!!

On April 16 and 17, 2015, the Facultade de Informática of the Universidade da Coruña will host the first "Coñece a Galipedia na UDC" sessions, a mixture of an open-doors day and a Galipedia wikimarathon. These sessions, promoted by the Facultade de Informática of the Universidade da Coruña, GPUL and the Galipedia, with the support of the Servizo de Normalización Lingüística of the Universidade da Coruña, aim to bring the free encyclopedia in Galician closer to anyone interested. The programme will include an introductory talk on editing the Galipedia and more advanced activities such as an introduction to the use of bots. There will also be a small wikimarathon on articles related to the Universidade da Coruña and other topics, such as computing, as well as the award ceremony and presentation of the winning entries of the Wikinformática contest, aimed at making the role of women in ICT visible among secondary school students.

To make the most of this event, attending in person is recommended, since participants will have the opportunity to meet other wikipedians and give visibility to the project. However, it is also possible to participate online. For both kinds of participation you need to have created a user account (you can do it here) and to sign up on the participants list.

More information at https://gl.wikipedia.org/wiki/Wikipedia:Primeiras_xornadas_Co%C3%B1ece_a_Galipedia_na_UDC

Activity programme

Thursday, April 16

  • 17:00-17:30 Introduction to the Galipedia and to editing with the Visual Editor (Elisardo Juncal, Galipedia)
  • 17:00-17:15 Planning and distribution of the editing work
  • 17:15-19:30 Editing of the proposed articles or others of interest to the attendees
  • 19:30-19:45 Evaluation of the session

Friday, April 17

  • 16:00-17:00 Award ceremony and presentation of the Wikinformática entries
  • 17:00-17:30 Introduction to the use of bots on the Galipedia (Roi Ardao López "Banjo", Galipedia)
  • 17:00-17:15 Planning and distribution of the editing work
  • 17:15-19:30 Editing of the proposed articles or others of interest to the attendees
  • 19:30-19:45 Evaluation of the session

by gpul on March 30, 2015 03:52 PM

Bringing sanity back to my T440s

As a long-time Thinkpad trackpoint user and owner of a Lenovo T440s, I always felt quite frustrated with the clickpad featured in this laptop, since it basically ditched all the physical buttons I was so used to and replaced them with a giant, weird and noisy “clickpad”.

Fortunately, following Peter Hutterer’s post on X.Org Synaptics support for the T440, I managed to get a semi-decent configuration where I basically disabled any movement in the touchpad and used it as three giant soft buttons. It certainly took quite some time to get used to it and avoid making too many mistakes, but it was at least usable thanks to that.

Then, just a few months ago, I learned about the new T450 laptops and how they reintroduced the physical buttons for the trackpoint… and felt happy and upset at the same time: happy to know that Lenovo finally reconsidered their position and decided to bring back some sanity to the legendary trackpoint, but upset because I realized I had bought the only Thinkpad to have ever featured such an insane device.

Luckily enough, I recently found someone selling these new T450 touchpads with the physical buttons on eBay, and people in many places seemed to confirm that they would fit and work in the T440, T440s and T440p (just google for it), so I decided to give it a try.

So, the new touchpad arrived here last week and I tried to fit it, although I got a bit scared at some point and decided to step back and leave it for a while. After all, this laptop is 7 months old and I did not want to risk breaking it either :-). But then I kept reading the T440s’s Hardware Maintenance Manual in my spare time and learned that I was actually closer than I thought, so I decided to give it another try this weekend… and this is the final result:

T440s with trackpoint buttons!

Initially, I thought of writing a detailed step by step guide on how to do the installation, but in the end it all boils down to removing the system board so that you can unscrew the old clickpad and screw the new one, so you just follow the steps in the T440s’s Hardware Maintenance Manual for that, and you should be fine.

If anything, I’d just add that you don’t really need to remove the heatsink from the board, just unplug the fan’s power cord; and that you can actually do this without removing the board completely, just lifting it enough to manipulate the 2 hidden screws under it. Also, I do recommend disconnecting all the wires connected to the main board as well as removing the memory module, the Wifi/3G cards and the keyboard. You can probably lift the board without doing that, but I’d rather follow those extra steps to avoid nasty surprises.

Last, please remember that this model has a built-in battery that you need to disable from the BIOS before starting to work with it. This is a new step compared to older models (therefore easy to overlook) and quite an important one, so make sure you don’t forget about it!

Anyway, as you can see the new device fits perfectly fine in the hole of the former clickpad and it even gets recognized as a Synaptics touchpad, which is good. And even better, the touchpad works perfectly fine out of the box, with all the usual features you might expect: soft left and right buttons, 2-finger scrolling, tap to click…

The only problem was that the trackpoint’s buttons would not work that well: the left and right buttons translated into “scroll up” and “scroll down”, and the middle button simply did not work at all. Fortunately, this is also covered in Peter Hutterer’s blog, where he explains that all the problems I was seeing are expected at this moment, since some patches in the Kernel are needed for the 3 physical buttons to become visible via the trackpoint again.

But in any case, for those like me who just don’t care about the touchpad at all, this comment in the tracking bug for this issue explains a workaround to get the physical trackpoint buttons working well right now (middle button included), simply by disabling the Synaptics driver and enabling psmouse configured to use the imps protocol.

And because I’m using Fedora 21, I followed the recommendation there and simply added psmouse.proto=imps to the GRUB_CMDLINE_LINUX line in /etc/default/grub, then ran grub2-mkconfig -o /boot/grub2/grub.cfg, and that did the trick for me.
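
In short, the two steps look like this (the rest of the GRUB_CMDLINE_LINUX value is whatever your system already had, elided here):

# /etc/default/grub
GRUB_CMDLINE_LINUX="... psmouse.proto=imps"

$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg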

Then I went into the BIOS and disabled the “trackpad” option, so as not to get the mouse moving and clicking randomly, and finally enabled scrolling with the middle button by creating a file in /etc/X11/xorg.conf.d/20-trackpoint.conf (based on the one from my old X201), like this:

Section "InputClass"
        Identifier "Trackpoint Wheel Emulation"
        MatchProduct "PS/2 Synaptics TouchPad"
        MatchDriver "evdev"
        Option  "EmulateWheel"  "true"
        Option  "EmulateWheelButton" "2"
        Option  "EmulateWheelInertia" "10"
        Option  "EmulateWheelTimeout" "190"
        Option  "Emulate3Buttons" "false"
        Option  "XAxisMapping"  "6 7"
        Option  "YAxisMapping"  "4 5"
EndSection

So that’s it. I suppose I will keep checking the status of the proper fix in the tracking bug and eventually move to the Synaptics driver again once all those issues get fixed, but for now this setup is perfect for me, and definitely way better than what I had before.

I only hope that I haven’t forgotten to plug in a cable when assembling everything back. At least I can tell I haven’t got any screws left, and everything I’ve tested seems to work as expected, so I guess it’s probably fine. Fingers crossed!

by mario on March 30, 2015 01:32 AM

March 24, 2015

3D printing HackLab (new session)

We at GPUL are pleased to announce the continuation of the 3D printing HackLab.
The next session will take place on Thursday, March 26, at 17:00, in laboratory 0.1w of the Facultade de Informática da Coruña.

If you are interested in this HackLab and haven't signed up yet, we'd appreciate it if you did so as soon as you can via the event form, even if you can't come to this session.

For some time now the free hardware world has been evolving rapidly, with a growing presence in society. Fruits of this evolution are the 3D printers and the Clone Wars project, which documents how to build your own 3D printer.

The idea of the HackLab is to create an open cooperative working group, fostering autonomous and cooperative learning among all its members.

We remind you that, although the 3D printer being assembled belongs to GPUL, the event is open: if you have a printer to assemble, bring it along so we can all learn together how this world works.

Thank you, and we count on you for this journey.

Attachment: cartel_impresion.png (301.12 KB)

by gpul on March 24, 2015 12:26 PM