May 24, 2016

I/O bursts with QEMU 2.6

QEMU 2.6 was released a few days ago. One new feature that I have been working on is a new way to configure I/O limits on disk drives to allow bursts and increase the responsiveness of the virtual machine. In this post I’ll try to explain how it works.

The basic settings

First I will summarize the basic settings that were already available in earlier versions of QEMU.

Two aspects of the disk I/O can be limited: the number of bytes per second and the number of operations per second (IOPS). For each one of them the user can set a global limit or separate limits for read and write operations. This gives us a total of six different parameters.

I/O limits can be set using the throttling.* parameters of -drive, or using the QMP block_set_io_throttle command. These are the names of the parameters for both cases:

  -drive                     block_set_io_throttle
  -------------------------  ---------------------
  throttling.iops-total      iops
  throttling.iops-read       iops_rd
  throttling.iops-write      iops_wr
  throttling.bps-total       bps
  throttling.bps-read        bps_rd
  throttling.bps-write       bps_wr

It is possible to set limits for both IOPS and bps at the same time, and for each case we can decide whether to have separate read and write limits or not, but if iops-total is set then neither iops-read nor iops-write can be set. The same applies to bps-total and bps-read/write.

The default value of these parameters is 0, which means unlimited.
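These constraints can be summed up in a small validation routine. The following sketch (illustrative Python, not QEMU code) rejects configurations that combine a total limit with separate read/write limits, treating 0 as unlimited:

```python
def validate_throttle(cfg):
    """Check QEMU-style throttling settings (0 means unlimited).

    For both IOPS and bps, a 'total' limit is mutually exclusive
    with the corresponding separate read/write limits.
    """
    for kind in ("iops", "bps"):
        total = cfg.get(f"{kind}-total", 0)
        read = cfg.get(f"{kind}-read", 0)
        write = cfg.get(f"{kind}-write", 0)
        if total and (read or write):
            raise ValueError(
                f"{kind}-total cannot be combined with "
                f"{kind}-read/{kind}-write")

# A global IOPS limit on its own is fine:
validate_throttle({"iops-total": 100})
```

Separate read and write limits without a total (e.g. `{"bps-read": 1048576, "bps-write": 524288}`) pass the check as well.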

In its most basic usage, the user can add a drive to QEMU with a limit of, say, 100 IOPS with the following -drive line:

-drive file=hd0.qcow2,throttling.iops-total=100

We can do the same using QMP. In this case all these parameters are mandatory, so we must set the ones that we don’t want to limit to 0:

   { "execute": "block_set_io_throttle",
     "arguments": {
        "device": "virtio0",
        "iops": 100,
        "iops_rd": 0,
        "iops_wr": 0,
        "bps": 0,
        "bps_rd": 0,
        "bps_wr": 0
     }
   }

I/O bursts

While the settings that we have just seen are enough to prevent the virtual machine from performing too much I/O, it can be useful to allow the user to exceed those limits occasionally. This way we can have a more responsive VM that is able to cope better with peaks of activity while keeping the average limits lower the rest of the time.

Starting from QEMU 2.6, it is possible to allow the user to do bursts of I/O for a configurable amount of time. A burst is an amount of I/O that can exceed the basic limit, and there are two parameters that control them: their length and the maximum amount of I/O they allow. These two can be configured separately for each one of the six basic parameters described in the previous section, but here we’ll use ‘iops-total’ as an example.

The I/O limit during bursts is set using ‘iops-total-max’, and the maximum length (in seconds) is set with ‘iops-total-max-length’. So if we want to configure a drive with a basic limit of 100 IOPS and allow bursts of 2000 IOPS for 60 seconds, we would do it like this (the line is split for clarity):

   -drive file=hd0.qcow2,
          throttling.iops-total=100,
          throttling.iops-total-max=2000,
          throttling.iops-total-max-length=60

Or with QMP:

   { "execute": "block_set_io_throttle",
     "arguments": {
        "device": "virtio0",
        "iops": 100,
        "iops_rd": 0,
        "iops_wr": 0,
        "bps": 0,
        "bps_rd": 0,
        "bps_wr": 0,
        "iops_max": 2000,
        "iops_max_length": 60
     }
   }

With this, the user can perform I/O on hd0.qcow2 at a rate of 2000 IOPS for 1 minute before it’s throttled down to 100 IOPS.

The user will be able to do bursts again if there’s a sufficiently long period of time with unused I/O (see below for details).

The default value for ‘iops-total-max’ is 0 and it means that bursts are not allowed. ‘iops-total-max-length’ can only be set if ‘iops-total-max’ is set as well, and its default value is 1 second.

Controlling the size of I/O operations

When applying IOPS limits all I/O operations are treated equally regardless of their size. This means that the user can take advantage of this in order to circumvent the limits and submit one huge I/O request instead of several smaller ones.

QEMU provides a setting called throttling.iops-size to prevent this from happening. This setting specifies the size (in bytes) of an I/O request for accounting purposes. Larger requests will be counted proportionally to this size.

For example, if iops-size is set to 4096 then an 8KB request will be counted as two, and a 6KB request will be counted as one and a half. This only applies to requests larger than iops-size: smaller requests will be always counted as one, no matter their size.

The default value of iops-size is 0 and it means that the size of the requests is never taken into account when applying IOPS limits.
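The accounting rule can be expressed in a few lines. This Python sketch mirrors the description above (it is an illustration, not QEMU’s implementation):

```python
def iops_cost(request_bytes, iops_size):
    """Number of I/O operations a request counts as for IOPS accounting.

    With iops_size == 0 every request counts as one operation.
    Requests up to iops_size also count as one; larger requests
    are counted proportionally to iops_size.
    """
    if iops_size == 0 or request_bytes <= iops_size:
        return 1.0
    return request_bytes / iops_size

# With iops-size=4096: an 8 KB request counts as 2, a 6 KB one as 1.5
# and a 1 KB one still counts as 1.
print(iops_cost(8192, 4096))   # 2.0
print(iops_cost(6144, 4096))   # 1.5
print(iops_cost(1024, 4096))   # 1.0
```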

Applying I/O limits to groups of disks

In all the examples so far we have seen how to apply limits to the I/O performed on individual drives, but QEMU allows grouping drives so they all share the same limits.

This feature is available since QEMU 2.4. Please refer to the post I wrote when it was published for more details.

The Leaky Bucket algorithm

I/O limits in QEMU are implemented using the leaky bucket algorithm (specifically the “Leaky bucket as a meter” variant).

This algorithm uses the analogy of a bucket that leaks water constantly. The water that gets into the bucket represents the I/O that has been performed, and no more I/O is allowed once the bucket is full.

To see the way this corresponds to the throttling parameters in QEMU, consider the following values:

  iops-total=100
  iops-total-max=2000
  iops-total-max-length=60

[IMAGE: bucket]

The bucket is initially empty, therefore water can be added until it’s full at a rate of 2000 IOPS (the burst rate). Once the bucket is full we can only add as much water as it leaks, therefore the I/O rate is reduced to 100 IOPS. If we add less water than it leaks then the bucket will start to empty, allowing for bursts again.

Note that since water is leaking from the bucket even during bursts, it will take a bit more than 60 seconds at 2000 IOPS to fill it up. After those 60 seconds the bucket will have leaked 60 x 100 = 6000, allowing for 3 more seconds of I/O at 2000 IOPS.

Also, due to the way the algorithm works, longer bursts can be done at a lower I/O rate, e.g. 1000 IOPS during 120 seconds.
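These numbers can be reproduced with a small model of the bucket. The sketch below assumes the bucket capacity is iops-total-max multiplied by iops-total-max-length, which matches the arithmetic above; it is an illustration, not QEMU’s actual code:

```python
def burst_duration(base_iops, configured_max, max_length, actual_rate):
    """Seconds the guest can sustain actual_rate before being throttled.

    The bucket capacity is configured_max * max_length. Water enters at
    actual_rate and leaks at base_iops, so the bucket fills at the
    difference between the two.
    """
    capacity = configured_max * max_length
    return capacity / (actual_rate - base_iops)

# 100 IOPS base, iops-total-max=2000, iops-total-max-length=60:
# bursting at the full 2000 IOPS lasts a bit more than 63 seconds...
print(round(burst_duration(100, 2000, 60, 2000), 1))  # 63.2
# ...while a gentler burst at 1000 IOPS lasts well over 2 minutes.
print(round(burst_duration(100, 2000, 60, 1000), 1))  # 133.3
```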

Acknowledgments

As usual, my work in QEMU is sponsored by Outscale and has been made possible by Igalia and the help of the QEMU development team.

[IMAGE: igalia-outscale]

Enjoy QEMU 2.6!

by berto on May 24, 2016 11:47 AM

May 17, 2016

GPUL Summer Of Code

What is this about?

In our eagerness to spread free software, we are looking to hire a student interested in these technologies for 3 months, part-time, so that, using the technologies of this era, they can help us with a couple of web-based projects that we want to push forward from GPUL (free software, of course).

 Basic requirements

  • Being a supporter of the free software movement is essential.

  • For the web project to be maintainable, we need you to know a bit about:

    • The MVC pattern (Model View Controller).

    • Object-Oriented Programming.

    • Git.

    • ORM (Object-Relational Mapping) or similar.

    You don’t need to be an expert. Since you’ll be developing free software, anyone will be able to help you improve your code.

  • A strong desire to learn.

  • We won’t be looking at degrees, so whether you are studying a different degree, a vocational course, or are self-taught, you can also apply without any problem.

 

It would be great (not a requirement, but if you want to stand out from the rest, these are some guidelines on what we are interested in)

  • Knowledge of some web framework: Django, Rails, ExpressJS, SpringMVC, Laravel…

  • Telling us that, instead of jQuery, you know how to use JavaScript with AngularJS or ReactJS…

  • Agile methodologies: Scrum, eXtreme Programming, Code Review, Continuous Integration…

  • Knowing your favourite IDE well: Eclipse, Atom, Emacs, vim…

  • Linux system administration: Bash, Debian, Docker, SSH…

  • Having collaborated with free software communities, through development, organizing events or in some other way.

  • Wanting to use this as part of your Master’s Thesis, Bachelor’s Thesis, Final Degree Project or similar.

  

What we offer

  • Flexible working hours.

  • Use of our room at the faculty to work in, if you prefer it to working remotely.

  • Weekly follow-up meetings.

  • Developing free software guided by experienced mentors.

  • We will advise you on submitting your project to the Concurso Universitario de Software Libre and to the award for the best free software TFG, or similar.

  • Becoming part of GPUL and all the activities we organize: technical talks where you can learn from the best professionals, hackathons and group trips to events such as FOSDEM.


About GPUL

GPUL is an association created at FIC in 1998 to promote and expand the use of free software. Today we remain active, trying to make the world a better place by promoting free culture, always in a good atmosphere, learning and growing both personally and professionally.

We were responsible for organizing several first-class international events in this field, such as GUADEC in 2012 and Akademy in 2015 (the main gatherings of GNOME and KDE users and developers, respectively).

We have applied what we learned there to other events which, although not international, we organize with the same care, such as the Xornadas de Introdución a GNU/Linux or the Xornadas Libres, where we always bring in great people from all over the world to come and tell us about things.

Now, in addition, with the GPUL Labs we are building a community of free software developers in our city, so that nobody has to move elsewhere to do interesting things with modern technologies and build real projects.

 

Process

Send us your CV at info@gpul.org, along with a link to your LinkedIn profile (if you have one) and your academic transcript.

If you have written free software in the past, include a link to where we can see your contributions. If not, don’t worry! But it’s good to also send us some code you have written that makes you feel proud, so that we have something more to evaluate you with; a university assignment or something similar will do.

And finally, we may want to have an interview with you, either in person or by videoconference.

Proposals will be accepted until June 12, inclusive, and a decision will be made by June 15 at the latest, so that the selected student, if they decide to use the developed project as a PFC/TFG/TFM, can submit the preliminary project proposal in time.

 

Salary and contract

We offer a part-time contract for 3 months with a salary of €400 per month, so that you can combine it with your studies while getting your first work experience.

The initial plan is to carry it out between June and September, and we are flexible with the schedule; if you have any special circumstances, such as exams, don’t hesitate to talk to us.

 

Tips

If you currently use proprietary software on a regular basis, talk to us. Seriously, realizing it and wanting to change it will make a good impression on us.

Don’t send us documents in proprietary formats such as doc or docx. Because you want to make a good impression on us, don’t you?

If you haven’t written free software yet, don’t worry. Maybe your first free project will be with GPUL.

Don’t sign your email with things like “Sent from my iPhone”.

 

This activity is part of the activities that the association carries out in collaboration with AMTEGA under the collaboration agreement signed for the promotion of free software in Galicia, and forms part of AMTEGA’s 2016 Free Software Action Plan.

Attachment: oferta.png (309.22 KB)

by gpul on May 17, 2016 03:30 PM

April 13, 2016

Chromium Browser on xdg-app

Last week I had the chance to attend for 3 days the GNOME Software Hackfest, organized by Richard Hughes and hosted at the brand new Red Hat’s London office.

And besides meeting new people and some old friends (which I admit is one of my favourite aspects of attending this kind of event), and discovering what is now my new favourite place for fast food near London Bridge, I happened to learn quite a few new things while working on my particular personal quest: getting the Chromium browser to run as an xdg-app.

While this might not seem to be an immediate need for Endless right now (we currently ship a Chromium-based browser as part of our OSTree based system), this was definitely something worth exploring as we are now implementing the next version of our App Center (which will be based on GNOME Software and xdg-app). Chromium updates very frequently with fixes and new features, and so being able to update it separately and more quickly than the OS is very valuable.

[IMAGE: Endless OS App Center]
Screenshot of Endless OS’s current App Center

So, while Joaquim and Rob were working on the GNOME Software related bits and discussing aspects related to Continuous Integration with the rest of the crowd, I spent some time learning about xdg-app and trying to get Chromium to build that way which, unsurprisingly, was not an easy task.

Fortunately, the base documentation about xdg-app together with Alex Larsson’s blog post series about this topic (which I wholeheartedly recommend reading) and some experimentation from my side was enough to get started with the whole thing, and I was quickly on my way to fixing build issues, adding missing deps and the like.

Note that my goal at this time was not to get a fully featured Chromium browser running, but to get something running based on the version that we use in Endless (Chromium 48.0.2564.82), with a couple of things disabled for now (e.g. Chromium’s own sandbox, udev integration…) and putting, of course, some holes in the xdg-app configuration so that Chromium can access the parts of the system that it needs to function (e.g. network, X11, shared memory, PulseAudio…).

Of course, the long term goal is to close as many of those holes as possible using Portals instead, as well as not giving up on Chromium’s own sandbox right away (some work will be needed here, since `setuid` binaries are a no-go in xdg-app’s world), but for the time being I’m pretty satisfied (and kind of surprised, even) that I managed to get the whole beast built and running after 4 days of work since I started :-).

But, as Alberto usually says… “screencast or it didn’t happen!”, so I recorded a video yesterday to properly share my excitement with the world. Here you have it:

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="https://www.youtube.com/embed/euwSnOm89hM" width="560"></iframe>
[VIDEO: Chromium Browser running as an xdg-app]

As mentioned above, this is work-in-progress stuff, so please hold your horses and manage your expectations wisely. It’s not quite there yet in terms of what I’d like to see, but it is definitely a step forward in the right direction, and something I hope will be useful not only for us, but for the entire Linux community as a whole. Should you be curious about the current status of the whole thing, feel free to check the relevant files in its git repository here.

Last, I would like to finish this blog post saying thanks specially to Richard Hughes for organizing this event, as well as the GNOME Foundation and Red Hat for their support in the development of GNOME Software and xdg-app. Finally, I’d also like to thank my employer Endless for supporting me to attend this hackfest. It’s been a terrific week indeed… thank you all!

Credit to Georges Stavracas


by mario on April 13, 2016 11:17 AM

February 18, 2016

Improving Media Source Extensions on WebKit ports based on GStreamer

During 2014 I started to become interested in how GStreamer was used in WebKit to play media content and how it related to Media Source Extensions (MSE). During 2015, my company Igalia strengthened its cooperation with Metrological to enhance the multimedia support in their customized version of WebKitForWayland, the web platform they use for their products for the set-top box market. This was an opportunity to do really interesting things in the multimedia field on a really nice hardware platform: the Raspberry Pi.

What are Media Source Extensions?

Normal URL playback in the <video> tag works by configuring the platform player (GStreamer in our case) with a source HTTP URL, so it behaves much like any other external player, downloading the content and showing it in a window. Special cases such as Dynamic Adaptive Streaming over HTTP (DASH) are automatically handled by the player, which becomes more complex. At the same time, the JavaScript code in the webpage has no way to know what’s happening with the quality changes in the stream.

The MSE specification lets the authors move the responsibility to the JavaScript side in that kind of scenario. A Blob object (Blob URL) can be configured to get its data from a MediaSource object. The MediaSource object can instantiate SourceBuffer objects. Video and Audio elements in the webpage can be configured with those Blob URLs. With this setup, JavaScript can manually feed binary data to the player by appending it to the SourceBuffer objects. The data is buffered and the playback time ranges generated by the data are accessible to JavaScript. The web page (and not the player) now has control over the data being buffered: its quality, codec and provenance. It’s even possible to synthesize the media data programmatically if needed, opening the door to media editors and media effects coded in JavaScript.

[IMAGE: mse1]

MSE is being adopted by the main content broadcasters on the Internet. It’s required by YouTube for its dedicated interface for TV-like devices and they even have an MSE conformance test suite that hardware manufacturers wanting to get certified for that platform must pass.

MSE architecture in WebKit

WebKit is a multiplatform framework with an end user API layer (WebKit2), an internal layer common to all platforms (WebCore) and particular implementations for each platform (GObject + GStreamer, in our case). Google and Apple have done great work bringing MSE to WebKit. They have led the effort to implement the common WebCore abstractions needed to support MSE, such as MediaSource, SourceBuffer, MediaPlayer and the integration with HTMLMediaElement (the video tag). They have also provided generic platform interfaces (MediaPlayerPrivateInterface, MediaSourcePrivate, SourceBufferPrivate), a working platform implementation for Mac OS X and a mock platform for testing.

[IMAGE: mse2]

The main contributions to the platform implementation for ports using GStreamer for media playback were done by Stephane Jadaud and Sebastian Dröge on bugs #99065 (initial implementation with hardcoded SourceBuffers for audio and video), #139441 (multiple SourceBuffers) and #140078 (support for tracks, more containers and encoding formats). This last patch hasn’t been merged into trunk yet, but I used it as the starting point for the work to be done.

GStreamer, unlike other media frameworks, is strongly based on the concept of pipeline: the data traverses a series of linked elements (sources, demuxers, decoders, sinks) which process it in stages. At a given point in time, different pieces of data are in the pipeline at the same time in varying degrees of processing stages. In the case of MSE, a special WebKitMediaSrc GStreamer element is used as the data source in the pipeline and also serves as interface with the upper MSE layer, acting as client of MediaSource and SourceBuffer. WebKitMediaSrc is spawned by GstPlayBin (a container which manages everything automatically inside) when an MSE SourceBuffer is added to the MediaSource. The MediaSource is linked with the MediaPlayer, which has MediaPlayerPrivateGStreamer as private platform implementation. In the design we were using at that time, WebKitMediaSrc was responsible for demuxing the data appended on each SourceBuffer into several streams (I’ve never seen more than one stream per SourceBuffer, though) and for reporting the statistics and the samples themselves to the upper layer according to the MSE specs. To do that, the WebKitMediaSrc encapsulated an appsrc, a demuxer and a parser per source. The remaining pipeline elements after WebKitMediaSrc were in charge of decoding and playback.

Processing appends with GStreamer

The MSE implementation in Chromium uses a chunk demuxer to parse (demux) the data appended to the SourceBuffers. It keeps the parsing state and provides a self-contained way to perform the demuxing. Reusing that Chromium code would have been the easiest solution. However, GStreamer is a powerful media framework and we strongly believe that the demuxing stage can be done using GStreamer as part of the pipeline.

Because of the way GStreamer works, it’s easy to know when an element outputs new data, but there’s no easy way to know when it has finished processing its input without discontinuing the flow with an End Of Stream (EOS) event and effectively resetting the element. One simple approach that works is to use timeouts. If the demuxer doesn’t produce any output after a given time, we consider that the append has produced all the MediaSamples it could and has therefore finished. Two different timeouts were used: one to detect when appends that produce no samples have finished (noDataToDecodeTimeout) and another to detect when no more samples are coming (lastSampleToDecodeTimeout). The former needs to be longer than the latter.
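The idea behind these two timeouts can be sketched outside of the WebKit code. The Python sketch below is a simplified illustration only: the function name and the timeout values are made up for the example, not taken from the implementation.

```python
def append_finished(elapsed_since_last_output, produced_samples,
                    no_data_timeout=0.3, last_sample_timeout=0.1):
    """Decide whether an append has been fully processed.

    If the demuxer produced no samples at all, wait for the longer
    noDataToDecodeTimeout before concluding the append is done; once
    samples have started flowing, a shorter period of silence
    (lastSampleToDecodeTimeout) is enough to conclude that no more
    samples are coming.
    """
    timeout = last_sample_timeout if produced_samples else no_data_timeout
    return elapsed_since_last_output >= timeout

# Samples flowed and 0.2 s of silence followed: the append is done.
print(append_finished(0.2, True))    # True
# No samples yet and only 0.2 s elapsed: keep waiting.
print(append_finished(0.2, False))   # False
```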

Another technical challenge was to perform append processing when the pipeline isn’t playing. While playback hasn’t started, the pipeline just prerolls (is filled with the available data until the first frame can be rendered on the screen) and then pauses there until continuous playback can start. However, the MSE spec expects the appended data to be completely processed and delivered to the upper MSE layer first, and then it’s up to JavaScript to decide whether the playback on screen must start or not. The solution was to add intermediate queue elements with a very big capacity to force a preroll stage long enough for the probes on the demuxer source (output) pads to “see” all the samples pass beyond the demuxer. This is how the pipeline looked at that time (see also the full dump):

[IMAGE: mse3]

While focusing on making the YouTube 2015 tests pass on our Raspberry Pi 1, we realized that the generated buffered ranges had strange micro-holes (e.g. [0, 4.9998]; [5.0003, 10.0]) and that was confusing the tests. There were definitely differences of interpretation between ChunkDemuxer and qtdemux, but this is a minor problem which can be solved by adding some extra time ranges that fill the holes. All these changes got the append feature into good shape and we could start watching videos more or less reliably on YouTube TV for the first time.

Basic seek support

Let’s focus on a real use case for a moment. The JavaScript code can be appending video data in the [20, 25] range and audio data in the [30, 35] range (because the [20, 30] range was appended before) while we’re still playing the [0, 5] range. Our previous design let the media buffers leave the demuxer and enter the decoder without control. This worked nicely for sequential playback, but was not compatible with non-linear playback (seeks). Feeding the decoder with video data for [0, 5] plus [20, 25] causes a big pause (while the timeline traverses [5, 20]) followed by a bunch of decoding errors (the decoder needs sequential data to work).

One possible improvement to support non-linear playback is to implement buffer stealing and buffer reinjecting at the demuxer output, so the buffers never go past that point without control. A probe steals the buffers, encapsulates them inside MediaSamples, pumps them to the upper MSE layer for storage and range reporting, and finally drops them at the GStreamer level. The buffers can be later reinjected by the enqueueSample() method when JavaScript decides to start the playback in the target position. The flushAndEnqueueNonDisplayingSamples() method reinjects auxiliary samples from before the target position just to help keeping the decoder sane and with the right internal state when the useful samples are inserted. You can see the dropping and reinjection points in the updated diagram:

[IMAGE: mse4]

The synchronization issues of managing several independent timelines at once must also be taken into account. Each of the ongoing append and playback operations happens in its own timeline, but the pipeline is designed to be configured for a common playback segment. The playback state (READY, PAUSED, PLAYING), the flushes needed by the seek operation and the prerolls also affect all the pipeline elements. This problem can be minimized by manipulating the segments by hand to accommodate the different timings and by getting the help of very large queues to sustain the processing in the demuxer, even when the pipeline is still paused. These changes can solve the issues and get the “47. Seek” test working, but YouTube TV is more demanding and requires a more structured design.

Divide and conquer

At this point we decided to simplify MediaPlayerPrivateGStreamer and refactor all the MSE logic into a new subclass called MediaPlayerPrivateGStreamerMSE. After that, the unified pipeline was split into N append pipelines (one per SourceBuffer) and one playback pipeline. This change solved the synchronization issues and divided a complex problem into two simpler ones. The AppendPipeline class, visible only to the MSE private player, is in charge of managing all the append logic. There’s one instance for each of the N append pipelines.

Each append pipeline is created by hand. It contains an appsrc (to feed data into it), a typefinder, a qtdemux, optionally a decoder (in case we want to support Encrypted Media Extensions too), and an appsink (to pick up the parsed data). In my desire to simplify, I removed the support for all formats except ISO MP4, the only one really needed for YouTube. The other containers could be reintroduced in the future.

[IMAGE: mse5]

The playback pipeline is what remains of the old unified pipeline, but simpler. It’s still based on playbin, and the main difference is that the WebKitMediaSrc is now simpler. It consists of N sources (one per SourceBuffer) composed by an appsrc (to feed buffered samples), a parser block and the src pads. Uridecodebin is in charge of instantiating it, like before. The PlaybackPipeline class was created to take care of some of the management logic.

[IMAGE: mse6]

The AppendPipeline class manages the callback forwarding between threads, using asserts to strongly enforce the access to WebCore MSE classes from the main thread. AtomicString and all the classes inheriting from RefCounted (instead of ThreadSafeRefCounted) can’t be safely managed from different threads. This includes most of the classes used in the MSE implementation. However, the demuxer probes and other callbacks sometimes happen in the streaming thread of the corresponding element, not in the main thread, so that’s why call forwarding must be done.

AppendPipeline also uses an internal state machine to manage the different stages of the append operation and all the actions relevant to each stage (starting/stopping the timeouts, processing the samples, finishing the appends and managing SourceBuffer aborts).

[IMAGE: mse7]

Seek support for the real world

With this new design, the use case of a typical seek works like this (very simplified):

  1. The video may currently be playing at some position (buffered, of course).
  2. The JavaScript code appends data for the new target position to each of the video/audio SourceBuffers. Each AppendPipeline processes the data and JavaScript is aware of the new buffered ranges.
  3. JavaScript seeks to the new position. This ends up calling the seek() and doSeek() methods.
  4. MediaPlayerPrivateGStreamerMSE instructs WebKitMediaSrc to stop accepting more samples until further notice and to prepare the seek (reset the seek-data and need-data counters). The player private performs the real GStreamer seek in the playback pipeline and leaves the rest of the seek pending for when WebKitMediaSrc is ready.
  5. The GStreamer seek causes some changes in the pipeline and eventually all the appsrc in WebKitMediaSrc emit the seek-data and need-data events. Then WebKitMediaSrc notifies the player private that it’s ready to accept samples for the target position and needs data. MediaSource is notified here to seek and this triggers the enqueuing of the new data (non displaying samples and visible ones).
  6. The pending seek at player private level which was pending from step 4 continues, giving permission to WebKitMediaSrc to accept samples again.
  7. Seek is completed. The samples enqueued in step 5 flow now through the playback pipeline and the user can see the video from the target position.

That was just the typical case, but more complex scenarios are also supported. This includes multiple seeks (pressing the forward/backward button several times), seeks to buffered areas (the easiest ones) and to unbuffered areas (where the seek sequence needs to wait until the data for the target area is appended and buffered).

Close cooperation from qtdemux is also required in order to get accurate presentation timestamps (PTS) for the processed media. We detected a special case when appending data much forward in the media stream during a seek. Qtdemux kept generating sequential presentation timestamps, completely ignoring the TFDT atom, which tells where the timestamps of the new data block must start. I had to add a new “always-honor-tfdt” attribute to qtdemux to solve that problem.

With all these changes the YouTube 2015 and 2016 tests are green for us and YouTube TV is completely functional on a Raspberry Pi 2.

Upstreaming the code during Web Engines Hackfest 2015

All this work is currently in the Metrological WebKitForWayland repository, but it could be a great upstream contribution. Last December I was invited to the Web Engines Hackfest 2015, an event hosted in Igalia premises in A Coruña (Spain). I attended with the intention of starting the upstreaming process of our MSE implementation for GStreamer, so other ports such as WebKitGTK+ and WebKitEFL could also benefit from it. Thanks a lot to our sponsors for making it possible.

At the end of the hackfest I managed to have something that builds in a private branch. I’m currently devoting some time to work on the regressions in the YouTube 2016 tests, clean unrelated EME stuff and adapt the code to the style guidelines. Eventually, I’m going to submit the patch for review on bugzilla. There are some topics that I’d like to discuss with other engineers as part of this process, such as the interpretation of the spec regarding how the ReadyState is computed.

In parallel to the upstreaming process, our plans for the future include getting rid of the append timeouts by finding a better alternative, improving append performance and testing seek even more thoroughly with other real use cases. In the long term we should add support for appendStream() and increase the set of supported media containers and codecs at least to webm and vp8.

Let’s keep hacking!

by eocanha on February 18, 2016 08:10 PM

January 26, 2016

Anita Borg Week

Once again Anita Borg week returns to FIC, with the goal of highlighting the success achieved by many women in the field of new technologies.

They present a programme that will cover professional careers with former FIC students, as well as approaching digital design for diversity.

Take a look and note down the talks; we’re sure you’ll find them interesting ;)

https://wiki.fic.udc.es/semanaanitaborg/eventos/coruna_2016.html


by gpul on January 26, 2016 01:13 AM

January 25, 2016

The GPUL Labs begin

This year at GPUL we decided our usual activities needed a shake-up, and we set out to build a community of developers who care about Free Software, Open Hardware and Free Culture here in A Coruña and in our region.

In short, the <Labs/> are a series of workshops, talks and programming hackathons based on free technologies, whose goal is to carry out a software development project from start to finish: working with a Raspberry Pi, creating a web application, talking about agile development methodologies and even good practices such as code review or continuous integration.

If you want to know more, don't hesitate to drop by the Labs website, where you can sign up, see the activities we plan to do and, if you like, also follow the videos and material from the activities in the code repository.

We count on your attendance to build a huge and active Free Software community in A Coruña :)


by gpul on January 25, 2016 11:45 PM

January 18, 2016

GPUL Extraordinary General Meeting

Notice is hereby given of a GPUL Extraordinary General Meeting on Wednesday, February 3, 2016, in the Aula de Graos of the Facultade de Informática.

    First call: 20:00
    Second call: 20:30

Agenda:

    Reading and approval, if appropriate, of the minutes of the previous meeting.
    Reading of member additions and departures since the last meeting.
    Start of the vote for the Board of Directors.
    Vote counting.
    Appointment of the new Board of Directors.
    Discussion and approval, if appropriate, of the association's will to
    be registered as a Public Utility Association (regulated by
    RD1740/2003, with the amendments of RD949/2015), and of the start of
    the procedure to that end, if appropriate.
    Any other business.

If it cannot be held in the Aula de Graos, an alternative room will be announced with sufficient notice.

Signed,
Marcos Chavarría,
Secretary of GPUL.

by marcos.chavarria on January 18, 2016 02:35 PM

December 30, 2015

Call for Elections to the Board of Directors

Notice is hereby given of elections to the GPUL Board of Directors for the following reasons:

  • At the request of the President.
  • Because twenty-four months have passed since the last call for elections to the Board of Directors.

According to the Electoral Regulations (attached), the period for submitting candidacies opens tomorrow. The electoral calendar is as follows:

  • Call date: 23/12/2015
  • Submission of candidacies: 24/12/2015 to 08/01/2016
  • Publication of the provisional list of candidacies: 11/01/2016
  • Period for appeals: 11/01/2016 to 13/01/2016
  • Publication of the final list of candidacies: 15/01/2016
  • Start of the electoral campaign: 18/01/2016
  • Electronic voting:
    • Requests: 13/01/2016 to 19/01/2016
    • Reception of votes: from 20/01/2016 until 6 hours before the first call of the Extraordinary General Meeting with the vote on the agenda.
  • Postal voting:
    • Requests: 24/12/2015 to 4/1/2016
    • Sending of ballots: 15/01/2016 to 19/01/2016
    • Reception of votes: from 15/01/2016 until 6 hours before the first call of the Extraordinary General Meeting with the vote on the agenda.
  • Call for the Extraordinary General Meeting with the vote on the agenda: 15/01/2016
  • Extraordinary General Meeting with the vote on the agenda: 02/02/2016 to 09/02/2016

Only the FNMT digital certificate will be accepted for electronic voting.

The current Board of Directors encourages all members to take part in the process.

by marcos.chavarria on December 30, 2015 07:21 PM

Frogr 1.0 released

I’ve just released frogr 1.0. I can’t believe it took me 6 years to move from the 0.x series to the 1.0 release, but here it is finally. For better or worse.

This release is again a small increment on top of the previous one that fixes a few bugs, makes the UI look a bit more consistent and “modern”, and includes some cleanups at the code level that I’ve been wanting to do for some time, like using G_DECLARE_FINAL_TYPE, which helped me get rid of ~1.7K LoC.

Last, I’ve created a few packages for Ubuntu in my PPA that you can use right away if you’re on Vivid or later, until it gets packaged by the distro itself; I’d expect it to be eventually available via the usual means in different distros, hopefully soon. For extra information, just take a look at frogr’s website at live.gnome.org.

Now remember to take lots of pictures so that you can upload them with frogr 🙂

Happy new year!

by mario on December 30, 2015 04:04 AM

December 17, 2015

Improving disk I/O performance in QEMU 2.5 with the qcow2 L2 cache

QEMU 2.5 has just been released, with a lot of new features. As with the previous release, we have also created a video changelog.

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="281" src="https://www.youtube.com/embed/lFgopoa9Rso?feature=oembed" width="500"></iframe>

I plan to write a few blog posts explaining some of the things I have been working on. In this one I’m going to talk about how to control the size of the qcow2 L2 cache. But first, let’s see why that cache is useful.

The qcow2 file format

qcow2 is the main format for disk images used by QEMU. One of the features of this format is that its size grows on demand, and the disk space is only allocated when it is actually needed by the virtual machine.

A qcow2 file is organized in units of constant size called clusters. The virtual disk seen by the guest is also divided into guest clusters of the same size. QEMU defaults to 64KB clusters, but a different value can be specified when creating a new image:

qemu-img create -f qcow2 -o cluster_size=128K hd.qcow2 4G

In order to map the virtual disk as seen by the guest to the qcow2 image in the host, the qcow2 image contains a set of tables organized in a two-level structure. These are called the L1 and L2 tables.

There is one single L1 table per disk image. This table is small and is always kept in memory.

There can be many L2 tables, depending on how much space has been allocated in the image. Each table is one cluster in size. In order to read or write data to the virtual disk, QEMU needs to read its corresponding L2 table to find out where that data is located. Since reading the table for each I/O operation can be expensive, QEMU keeps a cache of L2 tables in memory to speed up disk access.
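To make the two-level lookup concrete, here is a small sketch of the index arithmetic in shell, assuming the default 64 KiB clusters and the 8-byte L2 entries implied by the formulas below (the 5 GiB offset is just an example):

```shell
cluster_size=65536                           # default qcow2 cluster size
entries_per_l2_table=$((cluster_size / 8))   # 8-byte entries -> 8192 per table
guest_offset=$((5 * 1024 * 1024 * 1024))     # example: 5 GiB into the virtual disk
cluster_index=$((guest_offset / cluster_size))
l1_index=$((cluster_index / entries_per_l2_table))
l2_index=$((cluster_index % entries_per_l2_table))
echo "L1 index: $l1_index, L2 index: $l2_index"   # L1 index: 10, L2 index: 0
```

So a read at that offset needs the eleventh L2 table; if that table is not in the cache, QEMU has to fetch it from disk first.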

The L2 cache can have a dramatic impact on performance. As an example, here’s the number of I/O operations per second that I get with random read requests in a fully populated 20GB disk image:

L2 cache size Average IOPS
1 MB 5100
1.5 MB 7300
2 MB 12700
2.5 MB 63600

If you’re using an older version of QEMU you might have trouble getting the most out of the qcow2 cache because of this bug, so either upgrade to at least QEMU 2.3 or apply this patch.

(In addition to the L2 cache, QEMU also keeps a refcount cache. This is used for cluster allocation and internal snapshots, but I’m not covering it in this post. Please refer to the qcow2 documentation if you want to know more about refcount tables.)

Understanding how to choose the right cache size

In order to choose the cache size we need to know how it relates to the amount of allocated space.

The amount of virtual disk that can be mapped by the L2 cache (in bytes) is:

disk_size = l2_cache_size * cluster_size / 8

With the default values for cluster_size (64KB) that is

disk_size = l2_cache_size * 8192

So in order to have a cache that can cover n GB of disk space with the default cluster size we need

l2_cache_size = disk_size_GB * 131072

QEMU has a default L2 cache of 1MB (1048576 bytes) so using the formulas we’ve just seen we have 1048576 / 131072 = 8 GB of virtual disk covered by that cache. This means that if the size of your virtual disk is larger than 8 GB you can speed up disk access by increasing the size of the L2 cache. Otherwise you’ll be fine with the defaults.
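The formulas are easy to check with shell arithmetic; a quick sketch for a hypothetical 20 GB image with the default cluster size:

```shell
# L2 cache needed to cover a 20 GB disk with 64 KB clusters.
disk_size_gb=20
l2_cache_size=$((disk_size_gb * 131072))   # bytes of L2 cache needed
echo "$l2_cache_size"                      # 2621440, i.e. 2.5 MB
```

That is 2.5 MB of L2 cache to cover the whole 20 GB image, which matches the point in the benchmark table above where the IOPS figures jump.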

How to configure the cache size

Cache sizes can be configured using the -drive option on the command line, or using the 'blockdev-add' QMP command.

There are three options available, and all of them take bytes: l2-cache-size (maximum size of the L2 cache), refcount-cache-size (maximum size of the refcount block cache) and cache-size (maximum size of both caches combined).

There are two things that need to be taken into account:

  1. Both the L2 and refcount block caches must have a size that is a multiple of the cluster size.
  2. If you only set one of the options above, QEMU will automatically adjust the others so that the L2 cache is 4 times bigger than the refcount cache.

This means that these three options are equivalent:

-drive file=hd.qcow2,l2-cache-size=2097152
-drive file=hd.qcow2,refcount-cache-size=524288
-drive file=hd.qcow2,cache-size=2621440
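The equivalence follows from the 4:1 split described above; a quick check with shell arithmetic:

```shell
cache_size=2621440                       # combined cache, as in the third -drive line
l2_cache_size=$((cache_size * 4 / 5))    # four parts go to the L2 cache...
refcount_cache_size=$((cache_size / 5))  # ...one part to the refcount cache
echo "$l2_cache_size $refcount_cache_size"   # 2097152 524288
```

These are exactly the values used in the first two -drive lines.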

Although I’m not covering the refcount cache here, it’s worth noting that it’s used much less often than the L2 cache, so it’s perfectly reasonable to keep it small:

-drive file=hd.qcow2,l2-cache-size=4194304,refcount-cache-size=262144

Reducing the memory usage

The problem with a large cache size is that it obviously needs more memory. QEMU has a separate L2 cache for each qcow2 file, so if you’re using many big images you might need a considerable amount of memory if you want to have a reasonably sized cache for each one. The problem gets worse if you add backing files and snapshots to the mix.

Consider this scenario: a virtual disk whose active image hd1 was created as a snapshot on top of a backing file hd0 ([hd0] ← [hd1]).

Here, hd0 is a fully populated disk image, and hd1 a freshly created image as a result of a snapshot operation. Reading data from this virtual disk will fill up the L2 cache of hd0, because that’s where the actual data is read from. However hd0 itself is read-only, and if you write data to the virtual disk it will go to the active image, hd1, filling up its L2 cache as a result. At some point you’ll have in memory cache entries from hd0 that you won’t need anymore because all the data from those clusters is now retrieved from hd1.

Let’s now create a new live snapshot, so the chain becomes [hd0] ← [hd1] ← [hd2]:

Now we have the same problem again. If we write data to the virtual disk it will go to hd2 and its L2 cache will start to fill up. At some point a significant amount of the data from the virtual disk will be in hd2, however the L2 caches of hd0 and hd1 will be full as a result of the previous operations, even if they’re no longer needed.

Imagine now a scenario with several virtual disks and a long chain of qcow2 images for each one of them. See the problem?

I wanted to improve this a bit, so I worked on a new setting that allows the user to reduce the memory usage by removing cache entries that haven’t been used for a while.

This new setting is available in QEMU 2.5, and is called ‘cache-clean-interval‘. It defines an interval (in seconds) after which all cache entries that haven’t been accessed are removed from memory.

This example removes all unused cache entries every 15 minutes:

-drive file=hd.qcow2,cache-clean-interval=900

If unset, this parameter defaults to 0, which disables the feature.

Further information

In this post I only intended to give a brief summary of the qcow2 L2 cache and how to tune it in order to increase the I/O performance, but it is by no means an exhaustive description of the disk format.

If you want to know more about the qcow2 format, the specification included in the QEMU source tree (docs/specs/qcow2.txt) is a good starting point.

Acknowledgments

My work in QEMU is sponsored by Outscale and has been made possible by Igalia and the invaluable help of the QEMU development team.

Enjoy QEMU 2.5!

by berto on December 17, 2015 03:39 PM

December 13, 2015

The kernel ate my packets

Some time ago I had a problem with a server. It had two Ethernet interfaces connected to different VLANs. The main network traffic went via the default gateway in the first VLAN, but there was a listening service on the other interface.

Everything was right until we tried to reach the second interface from another node outside the second VLAN but close to it. It seemed there was no connection but, as I saw with tcpdump, the traffic was arriving. It was a simple test: I ran a ping from the other node (10.1.2.55) and captured traffic on the second interface (10.10.1.62):

[root@blackdog ~]# tcpdump -w /tmp/inc-eth1-ping.pcap -i eth1
tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
20 packets captured
20 packets received by filter
0 packets dropped by kernel
[root@blackdog ~]# tcpdump -nnr /tmp/inc-eth1-ping.pcap
reading from file /tmp/inc-eth1-ping.pcap, link-type EN10MB (Ethernet)
01:35:15.751507 IP 10.1.2.55 > 10.10.1.62: ICMP echo request, id 65466, seq 78, length 64
01:35:16.759271 IP 10.1.2.55 > 10.10.1.62: ICMP echo request, id 65466, seq 79, length 64
01:35:17.767223 IP 10.1.2.55 > 10.10.1.62: ICMP echo request, id 65466, seq 80, length 64
01:35:18.775153 IP 10.1.2.55 > 10.10.1.62: ICMP echo request, id 65466, seq 81, length 64

So the ping packets arrived at the server but there was no answer via this interface. I captured traffic on the other interface but there was no answer either:

[root@blackdog ~]# tcpdump -nnr /tmp/inc-eth0-ping.pcap |grep 10.1.2.55
[root@blackdog ~]#

OK, here is the cause:

[root@blackdog ~]# cat /proc/sys/net/ipv4/conf/all/rp_filter
1

And one solution:

[root@blackdog ~]# echo 2 > /proc/sys/net/ipv4/conf/all/rp_filter
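To make the change survive a reboot you can also persist it in the sysctl configuration; a sketch (on newer distros a drop-in file under /etc/sysctl.d/ is the preferred location):

```shell
# Persist the relaxed (loose) reverse-path filtering mode.
echo "net.ipv4.conf.all.rp_filter = 2" >> /etc/sysctl.conf
sysctl -p   # apply the settings now
```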

So let’s see again the incoming packets at eth1:

[root@blackdog ~]# tcpdump -nnr /tmp/inc-eth1-ping.pcap|grep 10.1.2.55
01:47:00.322056 IP 10.1.2.55 > 10.10.1.62: ICMP echo request, id 42171, seq 1, length 64
01:47:01.323834 IP 10.1.2.55 > 10.10.1.62: ICMP echo request, id 42171, seq 2, length 64
01:47:02.324601 IP 10.1.2.55 > 10.10.1.62: ICMP echo request, id 42171, seq 3, length 64
01:47:03.325823 IP 10.1.2.55 > 10.10.1.62: ICMP echo request, id 42171, seq 4, length 64

And the outgoing packets at eth0:

[root@blackdog ~]# tcpdump -nnr /tmp/inc-eth0-ping.pcap|grep 10.1.2.55
01:47:18.969567 IP 10.10.1.62 > 10.1.2.55: ICMP echo reply, id 42427, seq 1, length 64
01:47:19.970800 IP 10.10.1.62 > 10.1.2.55: ICMP echo reply, id 42427, seq 2, length 64
01:47:20.969751 IP 10.10.1.62 > 10.1.2.55: ICMP echo reply, id 42427, seq 3, length 64
01:47:21.968764 IP 10.10.1.62 > 10.1.2.55: ICMP echo reply, id 42427, seq 4, length 64
01:47:22.968705 IP 10.10.1.62 > 10.1.2.55: ICMP echo reply, id 42427, seq 5, length 64

What happened here? As this Red Hat note says, the rp_filter kernel parameter became stricter than in previous kernel versions, so the “1” value has a different meaning. For example, in the 2.6.18 kernel documentation (/usr/share/doc/kernel-doc-2.6.18/Documentation/networking/ip-sysctl.txt) you can read:

        1 - do source validation by reversed path, as specified in RFC1812
            Recommended option for single homed hosts and stub network
            routers. Could cause troubles for complicated (not loop free)
            networks running a slow unreliable protocol (sort of RIP),
            or using static routes.

And in 2.6.32 and more recent kernels:

        1 - Strict mode as defined in RFC3704 Strict Reverse Path 
            Each incoming packet is tested against the FIB and if the interface
            is not the best reverse path the packet check will fail.
            By default failed packets are discarded.

Of course, you have another (more elegant) solution: using multiple routing tables.
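A sketch of that policy-routing approach with the addresses from the post (the table number 100 and the gateway 10.10.1.1 are assumptions for illustration):

```shell
# Give eth1 its own routing table so replies from 10.10.1.62 leave
# through the interface they arrived on, regardless of rp_filter.
ip route add 10.10.1.0/24 dev eth1 src 10.10.1.62 table 100
ip route add default via 10.10.1.1 dev eth1 table 100
ip rule add from 10.10.1.62 table 100
```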

Thanks again to Rafa Serrada from HPE for giving me the trace that helped solve the problem :-)

on December 13, 2015 06:53 PM

November 26, 2015

Attending the Web Engines Hackfest

It’s certainly been a while since I last attended this event, back when it was a WebKitGTK+-only oriented hackfest 2 years ago, so I guess it was a matter of time before it happened again…

It will be different for me this time, though, as now my main focus won’t be on accessibility (yet I’m happy to help with that, too), but on fixing a few issues in the WebKit2GTK+ API layer that I found while working on our platform (Endless OS), mostly around its implementation of accelerated compositing.

Besides that, I’m particularly curious about seeing how the hackfest looks now that it has broadened its scope to include other web engines, and I’m also quite happy to know that I’ll be visiting my home town and meeting my old colleagues and friends from Igalia for a few days, once again.

Lastly, I’d like to thank my employer for sponsoring this trip, as well as Igalia for organizing this event one more time.

See you in Coruña!

by mario on November 26, 2015 11:29 AM

November 16, 2015

GPUL Labs

At GPUL this year we want to innovate a bit on our usual planning of activities, so we have been thinking for a while about a new way of organizing ourselves, with the idea of reclaiming the P for Programadores in the association's name and trying to rebuild that feeling of community within free software in the city of A Coruña.

GPUL Labs

This year GPUL's activity plan will revolve around a development project that we will start from the very beginning and take as far as the road leads us, first learning the basics of a language like Python as well as the basic concepts of version control with a modern system like Git, with the idea of advancing through the stages that every modern software project must go through.

We will talk about agile development methodologies, continuous integration systems for automatic test execution, documentation with LaTeX, creation of REST APIs and other things proposed by the participants.

Give us a hand

We set ourselves this ambitious goal at GPUL in order to recover that relationship within the computing community which has been fading so much in recent years, and we want it to serve as a springboard to spread free software in that community, but we cannot do this task alone.

WE NEED YOUR HELP!

We are looking for people to give us a hand occasionally with the organization of a talk or workshop, to help us find a speaker or, if they know the topic well, to be the speaker themselves :)

You have more information at the following link. We hope to see you there! ;)


by gpul on November 16, 2015 03:39 PM

November 07, 2015

Importing include paths in Eclipse

First of all, let me be clear: no, I’m not trying to leave Emacs again; I already got over that stage. Emacs is and will be my main editor for the foreseeable future, as it’s clear to me that there’s no other editor I feel more comfortable with, which is why I spent some time cleaning up my .emacs.d and making it more “manageable”.

But as much as I like Emacs as my main “weapon”, I sometimes appreciate the advantages of using a different kind of beast for specific purposes. And, believe me or not, in the past 2 years I learned to love Eclipse/CDT as the best work-mate I know when I need some extra help to get deep inside the two monster C++ projects that WebKit and Chromium are. And yes, I know Eclipse is resource-hungry, slow, bloated… and whatnot; but I’m lucky enough to have fast SSDs and lots of RAM in my laptop & desktop machines, so that’s not really a big concern for me anymore (even though I reckon that indexing Chromium on the laptop takes “quite some time”), so let’s move on 🙂

However, there’s this one little thing about Eclipse that still bothers me quite a lot: you need to manually set up the include paths for external dependencies that are not in a standard location, so that certain features work properly, such as code auto-completion, automatic error checking, call hierarchies… and so forth.

And yes, I know there is an Eclipse plugin that adds support for pkg-config, which should do the job quite well. But for some reason I can’t get it to work with Eclipse Mars, even though others apparently can use it there (and I remember using it with Eclipse Juno, so it’s definitely not a myth).

Anyway, I did not feel like fighting with that (broken?) plugin, and on the other hand I was actually quite inclined to play a bit with Python, so… my quick and dirty solution to this problem was to write a small script that takes a list of package names (as you would pass them to pkg-config) and generates XML content that you can import into Eclipse. Surprisingly, that worked quite well for me, so I’m sharing it here in case someone else finds it useful.

Using frogr as an example, I generate the XML file for Eclipse doing this:

  $ pkg-config-to-eclipse glib-2.0 libsoup-2.4 libexif libxml-2.0 \
        json-glib-1.0 gtk+-3.0 gstreamer-1.0 > frogr-eclipse.xml

…and then I simply import frogr-eclipse.xml from the project’s properties, inside the C/C++ General > Paths and Symbols section.

After doing that I got rid of all the brokenness caused by so many missing symbols and header files, got code auto-completion nicely working again, plus all those perks you would expect from this little big IDE. And all that without having to go through the pain of defining everything one by one in the settings dialog, thank goodness!
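The script itself isn’t reproduced in the post, but the core transformation is small. Here’s a rough sketch of its first half, extracting the include directories; the pkg-config output is faked in a variable so the pipeline can run standalone (real use would capture cflags="$(pkg-config --cflags-only-I glib-2.0 gtk+-3.0)"; the paths below are hypothetical):

```shell
# Sample `pkg-config --cflags-only-I` output (hypothetical paths).
cflags="-I/usr/include/glib-2.0 -I/usr/include/gtk-3.0"
# One include directory per line, with the -I prefix stripped; each of
# these would become one entry in the generated Eclipse XML.
includes="$(printf '%s\n' $cflags | sed 's/^-I//')"
echo "$includes"
```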

Now you can quickly see how it works in the video below:

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="https://www.youtube.com/embed/16TJ1zopjeY" width="560"></iframe>
VIDEO: Setting up a C/C++ project in Eclipse with pkg-config-to-eclipse

This has been very helpful for me, hope it will be helpful to someone else too!

by mario on November 07, 2015 12:35 AM

November 05, 2015

Somebody has changed all the system permissions

I originally submitted this post to Docker people in the celebration of the 2015 Sysadmin Day, and they selected it as one of their favorite war stories. Now I publish it in my own blog.

Some time ago I was working as a Linux sysadmin in a major company. Our team was in charge of the operating system, but other teams were the application administrators, so in some circumstances we allowed them some privileged commands via sudo. They could do some service installs and patching in this manner.

One day I received a phone call from one of our users. He told me there was a server with erratic behaviour. I tried to ssh into it. Connection refused. I tried to log in from the console, and I could only see weird messages.

So I booted the server in rescue mode with an OS ISO and mounted the filesystems. I began to see that someone had changed all the permissions across the whole system. I investigated for a while and could discover who the culprit was, and the command that was executed: a sudo chmod -R something /

How can we recover the server from a situation like this? After some previous steps (changing some permissions by hand, chrooting), we can do it using the RPM database:

for p in $(rpm -qa); do rpm --setperms $p; done
for p in $(rpm -qa); do rpm --setugids $p; done
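To see which files actually deviate from the database, rpm’s verify mode can be filtered for mode mismatches. A sketch with the rpm -Va output faked in a variable (hypothetical file names) so the filtering itself is visible; real use would pipe rpm -Va directly into the awk filter:

```shell
# In `rpm -Va` output, an 'M' in the flag string of the first column
# means the file mode (permissions) differs from the RPM database.
verify_output='.M.......  c /etc/example.conf
..5....T.    /usr/bin/example'
changed="$(printf '%s\n' "$verify_output" | awk '$1 ~ /M/ {print $NF}')"
echo "$changed"   # /etc/example.conf
```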

We had a SUSE server in our case, so I did an additional step:

/sbin/conf.d/SuSEconfig.permissions

And… of course, I would never have had this problem if the application had been jailed in a Docker container (and the user that ran the chmod in the State Prison ;-))

on November 05, 2015 07:46 PM

October 19, 2015

GPUL takes part in the Conference on good practices with Free Software in NGOs

This Thursday, October 22, GPUL will be present at the I Xornada de boas prácticas con Software Libre nas ONGs e Entidades de Acción Social, held at the Cidade da Cultura in Santiago de Compostela from 16:30. At this event our colleague Emilio J. Padrón González (@emiliojpg) and Ana Vázquez Fernández of the Coordinadora Galega de ONGD will give a talk entitled "Experiencia de colaboración no terceiro sector para a migración a Software Libre", explaining the experience of GPUL's collaboration in the migration to Free Software at the Coordinadora Galega de ONGDs.

I Xornada de boas prácticas con Software Libre nas ONGs e Entidades de Acción Social

The main result of that collaboration of GPUL with organizations such as the Coordinadora Galega de ONGDs and Enxeñería Sen Fronteiras Galicia was the migration of both organizations' systems to Free Software, with which they now work.

The talk will present how the migration process went, what needs must be covered in this kind of organization and some of the main challenges that arose along the way.

It is fairly common to see how fields that defend and promote the use of free and open technologies (both for the cost savings their adoption can bring in the medium and long term and, above all, for the technological independence and sovereignty they allow and the ethics behind their development model) fail to practice what they preach, using proprietary technologies in that very promotional work. This is frequent in many organizations devoted to the Third Sector, which still work daily with non-free systems and tools.

In Galicia we have a good number of non-profit associations with wide experience in the use and study of Free Software, classically known as LUGs or GLUGs, from GNU/Linux User Group. In this talk we present the collaboration experience of one of the longest-running GLUGs in Galicia, GPUL, with two Third Sector organizations, Enxeñería Sen Fronteiras (ESF) and the Coordinadora Galega de ONGs para o Desenvolvemento, which it advises and helps with the management and maintenance of their IT systems.


by gpul on October 19, 2015 11:03 AM

October 10, 2015

Running Vagrant on OpenSUSE

Some weeks ago Fedora Magazine published a post about running Vagrant on Fedora 22 using the libvirt provider. But if you try to repeat the procedure on openSUSE you’ll have to perform some different steps, because currently there is no Vagrant package in openSUSE (I use 13.2).

So you will do:

tsao@mylaptop :~> sudo zypper in ruby ruby-devel

tsao@mylaptop :~> sudo rpm -Uvh https://releases.hashicorp.com/vagrant/1.8.1/vagrant_1.8.1_x86_64.rpm

The most widely used virtualization provider for Vagrant is VirtualBox, so at this point you can run VirtualBox boxes if you were running VirtualBox VMs before.

But, if you want to run libvirt boxes, you will do:

tsao@mylaptop :~> sudo zypper in qemu libvirt libvirt-devel qemu-kvm
tsao@mylaptop :~> vagrant plugin install vagrant-libvirt
tsao@mylaptop :~> systemctl enable libvirtd
tsao@mylaptop :~> systemctl start libvirtd
tsao@mylaptop :~> sudo usermod -a -G libvirt tsao

And, at this point, you can add and run Vagrant-libvirt boxes. Enjoy it :-)

Update, March 4th, 2016: Thanks to George J. Johnson for warning me about some typos.

on October 10, 2015 09:39 PM

October 07, 2015

GPUL Extraordinary General Meeting

Venue: Room 2.0a (2nd floor), Facultade de Informática da Coruña

Date: October 14, 2015

First call: 19:30
Second call: 20:00

Agenda:

- Reading and approval, if appropriate, of the minutes of the previous meeting.
- Reading of member additions and departures since the last meeting.
- Reading and approval, if appropriate, of the 2014 accounts.
- State of the 2015 accounts.
- Discussion and approval, if appropriate, of the activities to carry out in 2016.
- Any other business.

by gpul on October 07, 2015 09:21 PM

September 25, 2015

XI Introductory Workshop on GNU/Linux and Free Software for new students

And we continue this start of term at full speed, this time with a small introductory workshop on Free Software and GNU/Linux in which, as every year, we will teach everyone who wants the basic commands for working with the terminal on GNU/Linux, and we will give a short intro to what free software is and why it is so cool.

The workshop will take place next Tuesday, September 29, in laboratory 1.1, in two sessions:

Morning session: 12:00 - 13:30

Afternoon session: 17:00 - 18:30

Admission is completely free and there is no need to sign up, so we'll be waiting for you!! :)


by gpul on September 25, 2015 10:12 AM

September 19, 2015

Notes about time in UNIX and Linux systems (II): NTP

In this second part of the post about time management I will write about NTP and its daemon configuration. As I mentioned in the previous post, if you need very accurate time the best option is using the ntp.org implementation of the protocol. If you need security over accuracy, then you can use the OpenBSD project implementation. OpenNTPd is not a complete implementation of the protocol but, as usual with OpenBSD software, it is good, well-documented, audited code.

NTP configuration

Tip: If you run GNU/Linux on virtual infrastructure, review the kernel boot parameters.

Some years ago I had a problem with virtual machines that weren’t able to synchronize with the NTP servers. The problem was solved by reviewing this matrix at VMware.

Tip: Don’t forget to open UDP port 123 towards the NTP servers in your firewall.

Here is a very simple /etc/ntp.conf file:

driftfile /var/lib/ntp/drift/ntp.drift # path for drift file
logfile   /var/log/ntp          # alternate log file
server server1
server server2

After “serverX” you can add some boot-time options like iburst (RHEL6/7, SLES12) or dynamic (SLES11). These options help to improve synchronization when the network is temporarily down and/or there is no name resolution.

Another interesting directive is driftfile; it helps to adjust the clock frequency when ntpd boots. Remember this file must be writable by the ntp user.

If you are configuring a SLES node, it’s easy to run yast. But maybe you are interested in a simple automated configuration, so you only want to touch /etc/ntp.conf. In that case you must disable the NTP configuration at /etc/sysconfig/network/config, setting the policy parameter empty:

[...]
## Type:        string
## Default:     "auto"
#
# Defines the NTP merge policy as documented in netconfig(8) manual page.
# Set to "" to disable NTP configuration.
#
NETCONFIG_NTP_POLICY="auto"

## Type:        string
## Default:     ""
#
# List of NTP servers.
#
NETCONFIG_NTP_STATIC_SERVERS=""
[...]

As I said about configuring timezone in Exadata (RHEL5, 6?), the standard procedure is running /opt/oracle.cellos/ipconf tool.

But if you are tempted to reconfigure /etc/ntp.conf and you make changes to the NTP servers, you must restart the cellwall service after doing it. This is the firewall daemon enabled by default on the storage cells. When cellwall boots, it scans the /etc/ntp.conf file looking for the NTP servers in order to open the ports.

How to configure the NTP daemon

Tip: If you are running databases, you must use the slewing option (-x).

The slewing option avoids abrupt time synchronizations. Time changes with great jumps are bad for database consistency, and very dangerous for some related services. As an example, if you are running Oracle CRS and you have some seconds of error, you must stop all CRS processes (it’s not enough to take the node off the cluster) before making a manual NTP synchronization. If you don’t stop the CRS processes, the synchronization can cause an outage.

SLES

The NTP daemon configuration is at /etc/sysconfig/ntp:

## Path:           Network/NTP
## Description:    Network Time Protocol (NTP) server settings
## Type:           string
## Default:        "-g -u ntp:ntp"
#
# Additional arguments when starting ntpd. The most
# important ones would be
# -u user[:group]   to make ntpd run as a user (group) other than root.
#
NTPD_OPTIONS="-g -u ntp:ntp"

## Type:           yesno
## Default:        yes
## ServiceRestart: ntp
#
# Shall the time server ntpd run in the chroot jail /var/lib/ntp?
#
# Each time you start ntpd with the init script, /etc/ntp.conf will be
# copied to /var/lib/ntp/etc/.
#
# The pid file will be in /var/lib/ntp/var/run/ntpd.pid.
#
NTPD_RUN_CHROOTED="yes"

## Type:           string
## Default:        ""
## ServiceRestart: ntp
#
# If the time server ntpd runs in the chroot jail these files will be
# copied to /var/lib/ntp/ besides the default of /etc/{localtime,ntp.conf}
#
NTPD_CHROOT_FILES=""

[...]

## Type:           boolean
## Default:        "no"
#
# Force time synchronization befor start ntpd
#
NTPD_FORCE_SYNC_ON_STARTUP="yes"

[...]

There are more options, but I think these are the most interesting: the ntpd options (where you can include the -x slewing option), chrooting (which improves the security of the daemon), and hard synchronization before starting the daemon.
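Putting the tip from above into practice on SLES, a minimal edit of /etc/sysconfig/ntp could look like this (a sketch; the other values are the defaults shown earlier):

```
# enable slewing (-x) for database servers, keeping the default options
NTPD_OPTIONS="-x -g -u ntp:ntp"

# hard sync before starting the daemon, and push the result to the RTC
NTPD_FORCE_SYNC_ON_STARTUP="yes"
NTPD_FORCE_SYNC_HWCLOCK_ON_STARTUP="yes"
```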

If the difference between the machine's current time and the NTP servers is larger than the tinker panic parameter (1000 seconds by default), ntpd exits with an error. But if you add the -g option, the daemon will synchronize on boot regardless of the jump (only once, at boot).
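For reference, the panic threshold itself can be tuned in /etc/ntp.conf with the tinker directive (a sketch; 1000 seconds is the compiled-in default):

```
# /etc/ntp.conf
tinker panic 0    # 0 disables the panic limit; any other value is in seconds
```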

Be careful with NTPD_FORCE_SYNC_ON_STARTUP: your time-sensitive applications must start after ntp to avoid time jumps.

It can also be interesting to enable the option NTPD_FORCE_SYNC_HWCLOCK_ON_STARTUP (if you enabled the previous one), in order to have an accurate time in the hardware clock. Remember that's the time the operating system takes on boot, before starting the NTP daemon.

As you can see, in SLES chrooting is active by default. Remember this option needs some files copied into /var/lib/ntp and /proc bind-mounted inside the jail. Sometimes I use mondorescue for bare-metal recovery, and I experienced some issues when I didn't exclude the ntp jail from the backup.

After the daemon configuration, you have some options to run the daemon:

root@SLES10_or_11:~ # rcntp start
root@SLES12:~ # systemctl start ntpd
root@SLES10_11_12:~ # service ntp start 

Don’t forget to enable the daemon by default on OS boot:

root@SLES10_or_11:~ # chkconfig ntp 35 
root@SLES12:~ # systemctl enable ntpd 

RHEL

The RHEL config file /etc/sysconfig/ntpd is less documented by default than the SLES one. This is the RHEL6 file:

# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid -g"

With the -x option (or if you added servers to /etc/ntp/step-tickers) the init script won't try to synchronize before starting the daemon. So, in RHEL6, if you want to do a hard sync before starting ntpd, you must enable the ntpdate service too.

It's a good idea to add SYNC_HWCLOCK=yes to /etc/sysconfig/ntpd (or to /etc/sysconfig/ntpdate if you enable the ntpdate service), as we did with the NTPD_FORCE_SYNC_HWCLOCK_ON_STARTUP option in SLES.
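A hedged example of the resulting RHEL6 file, assuming you want both slewing and the hardware clock update:

```
# /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid -g"
# also set the hardware clock after a successful sync
SYNC_HWCLOCK=yes
```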

In RHEL7 this use of ntpdate is deprecated; instead it is used as a time-sync.target provider, like sntp. In the documentation, Red Hat advises adding After=time-sync.target to your time-sensitive services in order to avoid important jumps caused by the initial synchronization with these tools.

ntpd chrooting is disabled by default in RHEL. I found a procedure for RHEL6; it's not as automagic as in SLES.

And… after the configuration, you can enable and start the daemon:

root@RHEL5_or_6:~ # chkconfig ntpd on 
root@RHEL7:~ # systemctl enable ntpd 

root@RHEL5_or_6:~ # service ntpd start 
root@RHEL7:~ # systemctl start ntpd 

HP-UX

In HP-UX 11.31 the xntpd (by HP) and ntpd (free software) implementations coexist. xntpd is not supported after April 1, 2014.

There is a configuration file called /etc/rc.config.d/netdaemons. As you may guess, you will find the (x)ntpd daemon configuration there:

[...]
XNTPD_NAME=ntpd
export NTPDATE_SERVER=
export XNTPD=1
export XNTPD_ARGS="-x"
[...]

In order to enable the service, you can either edit the file and set XNTPD=1, or run:

root@myHPUX:/# ch_rc -a -p XNTPD=1
root@myHPUX:/# ch_rc -l -p XNTPD   # show the status of the xntpd service on boot

And you start/stop the daemon in the classic way:

root@myHPUX:/# /sbin/init.d/xntpd start

AIX

In AIX the NTP daemon is enabled in /etc/rc.tcpip, along with the main OS network daemons.

[...]
# Start up Network Time Protocol (NTP) daemon
start /usr/sbin/xntpd "$src_running" "-x"
[...]

As you can see, I added the -x option there. I could also do it this way:

[root@myAIX /]# chssys -s xntpd -a "-x" # add the slewing option

[root@myAIX /]# chrctcp -S -a xntpd # -S start and -a enable the service

Start and check the xntpd status:

[root@myAIX /]# startsrc -s xntpd
[root@myAIX /]# lssrc -ls xntpd # check the service

Updated November 5th, 2015: If you upgrade from SLES11SP3 to SLES11SP4 and you have ntpd chrooted, you will have a problem with the name resolution of the NTP servers. The cause is the update to ntpd > 4.2.7. You can fix it by copying the needed files into the jail, but SUSE also provides a default /etc/ntp.conf with the options needed for backward compatibility, so nothing else is required.

on September 19, 2015 09:50 PM

September 15, 2015

GNU/Linux install party

Next September 24, we at GPUL will collaborate with the Oficina de Software Libre do CIXUG to organize a GNU/Linux installation workshop for students of the Facultade de Informática of the Universidade da Coruña.

The goal of the workshop is the installation and configuration of the Ubuntu 12.04 operating system, which is available in the FIC's own lab computers.

We will also present the most important characteristics of Free Software and of the installed operating system, and answer any questions attendees may have on the subject.

The event will start at 16:30 in room 0.5w.

Access to the workshop requires prior registration, up to the room's capacity (25 people):

http://osl.cixug.es/taller-de-instalacion-de-gnulinux-na-facultade-de-informatica-da-udc/

Hurry up and sign up!

Attachment: cartel_impresion.png (238.18 KB)

by gpul on September 15, 2015 01:53 PM

August 25, 2015

Amateur radio exam

Sometimes I think I would like to write an "amateur radio handbook" in Spanish, because I couldn't find much material of that kind when I wanted to prepare for the Spanish amateur radio license exam, and I believe at least half the fun inherent in learning something is teaching it to other people. Still, that would be a huge amount of work and it would take me a long time to complete. Since I don't have the time for it right now, for the moment I have decided to prepare a small sample exam with the kind of questions you may find in the real one. I hope you find it useful.

Amateur radio exam

1. In the 10-meter band, using single-sideband modulation, what is the maximum allowed bandwidth?
  A. 10 meters.
  B. It depends on whether the band is municipal or military.
  C. All the bandwidth available between "Valencia" and "Islas Canarias".

2. For which of the following functions can you not use a transistor?
  A. Switch.
  B. Mixer.
  C. Listening to football and bullfights.

3. For a transformer with 50 turns in the primary and 200 in the secondary, what is the ratio between the input impedance and the output impedance?
  A. There's no rhyme or reason to it.
  B. More than zero and less than infinity.
  C. Why do we insist on demanding a ratio, instead of letting the transformer freely do whatever it wants?

4. What is the allowed limit for unwanted emissions?
  A. It depends on the frequency. For example, every single day would be too much.
  B. 35 decibels by day and 30 by night, measured with the windows closed.
  C. Considering the crap they broadcast on TV every day, higher than I thought.

5. Two people 2500 km apart want to communicate at noon at the peak of the solar cycle. Which band should they use?
  A. The 2500 km band.
  B. The bagpipe band of the provincial council of Ourense.
  C. Broadband Internet.

6. Which Spanish district comprises the provinces of Barcelona, Girona, Lleida and Tarragona?
  A. District 3.
  B. Number 1! Always number 1!
  C. Ask me next year and the answer might surprise you.

7. What is the radiation pattern of a 4-element Yagi antenna mounted horizontally 15 meters above ground level and parallel to it?
  A. Radiation, you say?
  B. Dear mother of mine, did you seriously say radiation?
  C. High gain towards the front with side nulls and a ratio of... wait, did you really say radiation?

8. Which of the following is a good practice to use with repeaters?
  A. Telling them they are failures for repeating a school year.
  B. I couldn't think of any other funny options to put here.

9. What is the critical frequency?
  A. A frequency incapable of doing anything by itself, yet it still has opinions on what everybody else does.
  B. The frequency below which one doesn't bathe often enough.
  C. Probably one of those radio talk shows.

10. What is Morse code made of?
  A. Dots and dashes.
  B. Bells and whistles.
  C. M, O, R, S and E.

by jacobo on August 25, 2015 06:00 AM

August 14, 2015

I/O limits for disk groups in QEMU 2.4

QEMU 2.4.0 has just been released, and among many other things it comes with some of the stuff I have been working on lately. In this blog post I am going to talk about disk I/O limits and the new feature to group several disks together.

Disk I/O limits

Disk I/O limits allow us to control the amount of I/O that a guest can perform. This is useful for example if we have several VMs in the same host and we want to reduce the impact they have on each other if the disk usage is very high.

The I/O limits can be set using the QMP command block_set_io_throttle, or on the command line using the throttling.* options of the -drive parameter. Both the throughput and the number of I/O operations can be limited. For more fine-grained control, each of them can be limited on read operations, write operations, or the combination of both:

Example:

-drive if=virtio,file=hd1.qcow2,throttling.bps-write=52428800,throttling.iops-total=6000

In addition to that, it is also possible to configure the maximum burst size, which defines a pool of I/O that the guest can perform without being limited.

One additional parameter named iops_size allows us to deal with the case where big I/O operations could be used to bypass the limits we have set. With it, if a particular I/O operation is bigger than iops_size then it is counted several times when calculating the I/O limits. So a 128KB request will be counted as 4 requests if iops_size is 32KB.
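The accounting rule can be sketched in a few lines of Python (an illustration of the arithmetic, not QEMU's actual code):

```python
import math

def accounted_ops(request_bytes: int, iops_size: int) -> int:
    """Number of operations a single request counts as, given iops_size.

    A request no bigger than iops_size counts as one operation;
    bigger requests are charged proportionally, rounding up."""
    if iops_size <= 0 or request_bytes <= iops_size:
        return 1
    return math.ceil(request_bytes / iops_size)

# A 128KB request with iops_size=32KB is charged as 4 operations:
print(accounted_ops(128 * 1024, 32 * 1024))  # 4
```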

Group throttling

All of these parameters I’ve just described operate on individual disk drives and have been available for a while. Since QEMU 2.4 however, it is also possible to have several drives share the same limits. This is configured using the new group parameter.

The way it works is that each disk with I/O limits is a member of a throttle group, and the limits apply to the combined I/O of all group members using a round-robin algorithm. To put several disks together, just use the group parameter on all of them with the same group name. Once the group is set, there's no need to pass the parameter to block_set_io_throttle anymore unless we want to move the drive to a different group. Since the I/O limits apply to all group members, it is enough to use block_set_io_throttle on just one of them.

Here’s an example of how to set groups using the command line:

-drive if=virtio,file=hd1.qcow2,throttling.iops-total=6000,throttling.group=foo
-drive if=virtio,file=hd2.qcow2,throttling.iops-total=6000,throttling.group=foo
-drive if=virtio,file=hd3.qcow2,throttling.iops-total=3000,throttling.group=bar
-drive if=virtio,file=hd4.qcow2,throttling.iops-total=6000,throttling.group=foo
-drive if=virtio,file=hd5.qcow2,throttling.iops-total=3000,throttling.group=bar
-drive if=virtio,file=hd6.qcow2,throttling.iops-total=5000

In this example, hd1, hd2 and hd4 are all members of a group named foo with a combined IOPS limit of 6000, and hd3 and hd5 are members of bar. hd6 is left alone (technically it is part of a 1-member group).
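Via QMP, moving a drive to a different group (or updating that group's limits) could look like this; the device name virtio0 is just an assumption, and as usual all the throttling arguments are mandatory:

```
   { "execute": "block_set_io_throttle",
     "arguments": {
        "device": "virtio0",
        "group": "bar",
        "iops": 3000,
        "iops_rd": 0,
        "iops_wr": 0,
        "bps": 0,
        "bps_rd": 0,
        "bps_wr": 0
     }
   }
```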

Next steps

I am currently working on providing more I/O statistics for disk drives, including latencies and average queue depth on a user-defined interval. The code is almost ready. Next week I will be in Seattle for the KVM Forum where I will hopefully be able to finish the remaining bits.

I will also attend LinuxCon North America. Igalia is sponsoring the event and we have a booth there. Come if you want to talk to us or see our latest demos with WebKit for Wayland.

See you in Seattle!

by berto on August 14, 2015 10:22 AM

August 11, 2015

Notes about time in UNIX and Linux systems (I): time zones

I decided to write about handling time in UNIX/Linux systems because of a message like this:

[29071122.262612] Clock: inserting leap second 23:59:60 UTC

I saw logs like this on my servers last June 30, 2015. Of course, I had been aware of it for some months and had to do some work to be ready (kernel/ntpd upgrades depending on the package version; we work with 8 main releases of 3 GNU/Linux distributions). During previous leap seconds, some issues affected Linux servers all around the world. As I was praying for it, nothing happened after the leap second insertion, and I slept deeply. But this has been one of those rare situations where having different tiers (development/integration/staging/production) means nothing, because you test all the environments at the same time.

Ok, let's go. A server needs an accurate time. It's important, especially in database servers. So we will use the NTP daemon. In RHEL7 we could use chronyd, but the recommendation for servers with a stable time source is to use ntpd.

But, of course, if we didn't do it before (maybe during the OS installation) we will need to adjust the time zone.

GNU/Linux

In previous RHEL/SLES major releases we must edit /etc/sysconfig/clock:

TIMEZONE="Europe/Madrid"
UTC=true

The meaning of the first option is clear (we can see our options under /usr/share/zoneinfo). The second option indicates that the hardware clock is set to UTC.

But this configuration requires rebooting the node, and sometimes that's not possible. So, for it to take effect immediately, we run this command:

root@tardis:~ # ln -s /usr/share/zoneinfo/Europe/Madrid /etc/localtime

In SLES of course you can use YaST for this task too.

If we are working on RHEL7/SLES12, we need to deal with Skyne^W^W systemd. And it's easy: we only need to run:

root@tardis:~ # timedatectl set-timezone Europe/Madrid
root@tardis:~ # timedatectl
      Local time: mar 2015-08-11 16:29:00 CEST
  Universal time: mar 2015-08-11 14:29:00 UTC
        RTC time: mar 2015-08-11 14:29:00
       Time zone: Europe/Madrid (CEST, +0200)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: yes
 Last DST change: DST began at
                  dom 2015-03-29 01:59:59 CET
                  dom 2015-03-29 03:00:00 CEST
 Next DST change: DST ends (the clock jumps one hour backwards) at
                  dom 2015-10-25 02:59:59 CEST
                  dom 2015-10-25 02:00:00 CET

In these versions, we can list the available time zones:

root@tardis:~ # timedatectl list-timezones
Africa/Abidjan
Africa/Accra
Africa/Addis_Ababa
Africa/Algiers
Africa/Asmara
Africa/Bamako
...

There is a peculiar case with the Oracle Exadata product. The first time you face this hardware+software stack, you may be tempted to manage it like any other RHEL server. But if you read the Exadata documentation (you have the PDFs in /usr/share/doc…) you'll see there are additional consistency layers (so be careful installing packages from other distributions ;-)).

For example, at the storage cells you stop the services and run the /opt/oracle.cellos/ipconf utility. When you finish, the changes are reflected in the usual config files and in /opt/oracle.cellos/cell.conf:

...
  <Ntp_drift>/var/lib/ntp/drift</Ntp_drift>
  <Ntp_servers>ntpserver1</Ntp_servers>
  <Ntp_servers>ntpserver2</Ntp_servers>
  <System_active>non-ovs</System_active>
  <Timezone>Europe/Madrid</Timezone>
  <Version>12.1.2.1.0</Version>
...
</Cell>

(It's an 11.2 cell XML configuration file; the new releases have a different syntax)

At the compute nodes the configuration change must be done in /etc/sysconfig/clock, but you must stop and disable the CRS before the change.

Here is the full configuration guide for Exadata 12c RC1 components.

HP-UX

In HP-UX 11.11, 11.23 and 11.31 the timezone configuration resides in the /etc/TIMEZONE script:

TZ=MET-1METDST
export TZ

You can edit the file or run:

set_parms timezone

There are two kernel-level parameters, timezone and dst, that you can touch for legacy applications. They are no longer used.

AIX

In AIX the standard method for configuration is using smit. So you can run smitty and go to the System Environments menu. The changes are reflected in the file /etc/environment.

Note that in AIX 5.3 the configuration is a bit more complex: in this version you must configure the DST, etc. There is a guide on IBM's website.

In the next chapter I will cover ntpd administration.

on August 11, 2015 02:20 PM

July 07, 2015

Old habits, new times

Today I begin a new blog.

This will be my third project. More than 12 years ago three friends began linuxbeat.net. Juanjo, Cañete and I wrote about technology, the University where we were studying, politics… That was the age when people socialised at the blog level: you could trace a social network by following the links from one blog to other blogs.

Most of them were written on free services like Blogspot, Photoblog… People left behind the uncomfortable, ugly, poorly updated static pages of the 90's, and new hobbyists and experts in different areas (but with no idea about web development) began to write and enrich the World Wide Web.

But as we were technology fanboys (we were active members of GPUL, the Coruña Linux users group), we rented a Spanish hosting, and we installed and configured our WordPress via ssh.

In 2005 I launched my own weblog. My domain was enelparaiso.org, where I built my personal quasi-static page (it was generated by a Wikka Wiki engine) and a blog (WordPress again). I spent some weeks until I found a cheap hosting in Canada that allowed ssh administration.

At umask 077 The Flight of an Albatross I wrote 393 posts on Tech, Civil Engineering, Philosophy, Politics, Solidarity, Religion, Jazz, Poetry… My writing rhythm was high in the first years, but as my jobs became more and more demanding, I progressively abandoned the blog. It happened at the same time I began to use the modern social networks: Facebook, G+ (do you remember Orkut?), Identi.ca, Twitter, Diaspora…

I use social networks today like I used the blog before: I vent my thoughts, and I maintain communication with friends and family. So, why begin a new blog?

A blog is a perfect opportunity to procrastinate put my thoughts in order. Nowadays I have a very demanding job. Sometimes I have to delay investigating or improving methods and procedures because of my daily workload. So I'll try to force myself to stop at least once a week to write about my job, sharing my experiences.

And as it's a new age, the underlying infrastructure will be different. This new web is hosted on an AWS EC2 instance. And, as in the age of cloud computing we have to improve performance, I will go back to using static pages. Of course you will see they are a bit prettier than the pure HTML pages we wrote in the 90's. Now I use hugo, a static web generator written in Go, with the theme hugo-uno, written by Fredrik Loch.

I hope it will be useful to you :-)

on July 07, 2015 08:41 PM

July 03, 2015

On Linux32 chrooted environments

I have a chrooted environment in my 64bit Fedora 22 machine that I use every now and then to work on a debian-like 32bit system where I might want to do all sorts of things, such as building software for the target system or creating debian packages. More specifically, today I was trying to build WebKitGTK+ 2.8.3 in there and something weird was happening:

The following CMake snippet was not properly recognizing my 32bit chroot:

string(TOLOWER ${CMAKE_HOST_SYSTEM_PROCESSOR} LOWERCASE_CMAKE_HOST_SYSTEM_PROCESSOR)
if (CMAKE_COMPILER_IS_GNUCXX AND "${LOWERCASE_CMAKE_HOST_SYSTEM_PROCESSOR}" MATCHES "(i[3-6]86|x86)$")
    ADD_TARGET_PROPERTIES(WebCore COMPILE_FLAGS "-fno-tree-sra")
endif ()

After some investigation, I found out that CMAKE_HOST_SYSTEM_PROCESSOR relies on the output of uname to determine the type of the CPU, and this is what I was getting if I ran it myself:

(debian32-chroot)mario:~ $ uname -a
Linux moucho 4.0.6-300.fc22.x86_64 #1 SMP Tue Jun 23 13:58:53 UTC 2015
x86_64 x86_64 x86_64 GNU/Linux

Let’s avoid nasty comments about the stupid name of my machine (I’m sure everyone else uses clever names instead), and see what was there: x86_64.

That looked wrong to me, so I googled a bit to see what others did about this and, besides finding all sorts of crazy hacks around, I found that in my case the solution was pretty simple just because I am using schroot, a great tool that makes life easier when working with chrooted environments.

Because of that, all I had to do was specify personality=linux32 in the configuration file for my chrooted environment, and that's it. Just by doing that and re-entering the “jail”, the output became much saner:

(debian32-chroot)mario:~ $ uname -a
Linux moucho 4.0.6-300.fc22.x86_64 #1 SMP Tue Jun 23 13:58:53 UTC 2015
i686 i686 i686 GNU/Linux

And of course, WebKitGTK+ would now recognize and use the right CPU type in the snippet above, and I could “relax” again while watching WebKit build.

Now, for extra reference, this is the content of my schroot configuration file:

$ cat /etc/schroot/chroot.d/00debian32-chroot
[debian32-chroot]
description=Debian-like chroot (32 bit) 
type=directory
directory=/schroot/debian32/
users=mario
groups=mario
root-users=mario
personality=linux32

That is all, hope somebody else will find this useful. It certainly saved my day!

by mario on July 03, 2015 01:31 PM

June 05, 2015

The beginnings are hard... and fun

Many of you already know that GPUL was founded in 1998. Like the freak^M^M members of so many other LUGs (Linux Users Groups) born around that time, we lived through a very special era, when the Internet was slowly arriving at our homes with laughable bandwidth. Back then we saw each other's faces much more often, and the mailing lists also had wild activity. There were technical discussions, mutual help, new ideas, more or less successful experiments... Much of that activity took place in office 0.05, on floor -1 of the Facultade de Informática of the UDC.

By the end of 2006 things had changed a lot for most of us. People starting to work, others of us still trying to finish our degrees, new generations coming in... and someone up high decided it was about time we left the office to a professor and went to bother somewhere else :-D That somewhere else is the office you know today which, despite being under the emergency staircase and next to the men's bathrooms :-P, has the advantage that it fits quite a lot more stuff, something that has been very handy for organizing much bigger and more professional events, like GUADEC 2012 or the upcoming Akademy 2015.

The thing is that, before packing up our stuff and leaving our lifelong office, I had the idea of recording right there what those early times had been like, told through the voices of their protagonists. There are a few dozen hours of homemade recordings, with the background noise of the server fans and variable lighting... made in a hurry at the end of that 2006. My original idea was to make a single documentary film; I intended to have it ready for the association's tenth anniversary, but due to circumstances the whole thing kept getting postponed *a lot*.

Some time ago we were lucky to meet Brân González Patiño, whom we kidnap^M^M turned into our audiovisual expert, given his training and professional work. Brân contributed more than just technique: he had the idea of transforming those dusty recordings into a documentary series, he gave it the right pace and the necessary distant perspective, so it wouldn't remain a mere nostalgia piece for us. We aim to tell a local story that is at the same time universal, so that people who don't know us but lived through similar experiences quickly identify with it. Hence its title: «GPUL: historia de un LUG cualquiera» ("GPUL: the story of an ordinary LUG")

We will update this post with the episodes as they get published. Enjoy :-)

Chapter 1: The birth of GPUL

<iframe frameborder="0" height="480" src="https://archive.org/embed/historia_capitulo1" width="640"></iframe>


Chapter 2: Office 0.05

<iframe frameborder="0" height="480" src="https://archive.org/embed/gpul_historia_capitulo2" width="640"></iframe>


Chapter 3: Experimenting with Linux

<iframe frameborder="0" height="480" src="https://archive.org/embed/gpul_historia_capitulo3" width="640"></iframe>


Chapter 4: Change of cycle

<iframe frameborder="0" height="480" src="https://archive.org/embed/gpul_historia_capitul4" width="640"></iframe>

Summary episode

<iframe frameborder="0" height="480" src="https://archive.org/embed/Gpul_historia_resumen" width="640"></iframe>

 

Episode 5: New times

<iframe frameborder="0" height="480" src="https://archive.org/embed/gpul_historia_capitulo5" width="640"></iframe>

 

 

by tsao on June 05, 2015 03:21 PM

May 27, 2015

Looking (way) back...

We now have the trailer for the documentary series about GPUL's history. And in a few days... the first episode :-)

 

<iframe frameborder="0" height="480" src="https://archive.org/embed/trailer_historiadeunlugcualquiera" width="640"></iframe>

by tsao on May 27, 2015 09:37 PM

May 25, 2015

The word is “restomod”

“Restomod”. It's a word I didn't know until recently, and it has a lot to do with the cars I post here.

What is a restomod

“Restomod” is a portmanteau of restored and modified (or modernized, depending on whom you ask). There is no precise definition, but the consensus goes like this: a restomod is a classic car that, while keeping an appearance very close to the original, has been modified with modern components to make it safer and/or improve its performance.

According to the book How to build Ford Restomod street machines, the term was coined by Jim Smart, of the magazine Mustangs & Ford Machines, in 1995. He was helped by Ron Bramlett, owner of the shop Mustangs Plus, who registered the word in 2001.

During the 80s, the fashion among Mustang owners was to keep their cars exactly as they had left the factory. Hunting for original components was their greatest hobby. But in the 90s “heretics” started to appear who added further modifications to their cars. At first they were only modifications that could be easily undone, in case the car had to be returned to its original state. The products of these changes were called restomods: cars that were restored but modified. Little by little the modifications became more radical: new engines, new wheels, new seats and interiors, new sound systems… The cars were still Mustangs, but they could no longer be considered classics. Although they had been restored, they had also been modified beyond what Ford's engineers had planned.

The example that made the term known to the general public is one that has already appeared here: Eleanor, the Mustang from 60 seconds.

Which, actually, is not a restomod. That car is a modern model built expressly for the film. The movie “Gone in 60 Seconds” starring Nicolas Cage is a remake of a 1974 film of the same name, which featured the original “Eleanor”:

The 1974 “Eleanor” (source: ford-life.com)

How much mod does a restomod need?

The subject of most discussions about restomods on the Internet is how much you need to modify a classic car to consider it a restomod. Is changing the wheels enough? Do you have to swap the engine? Does a kit that only changes something about the car's appearance, like the mirrors or the radiator grille, count?

Jay Leno, owner of one of the largest collections of classic cars, said in an article for Popular Mechanics:

Take my two 1925 Doble steam cars. They weigh 6000 pounds and move pretty well but only have rear brakes. That’s insane. I put brake drums on the front, with Corvette disc brakes hidden inside them. Now I can comfortably drive my Dobles, because they reliably stop.

This is a curious case of restomodding, because the car Leno is talking about is a 1925 car... steam-powered! This one, specifically:

Jay Leno in his 1925 Doble Model E Roadster (source: www.classiccarsblog.com)

However, the car keeps its original appearance, except that the front disc brakes are visible. It's a textbook restomod.

But Jay Leno also has more conventional cars to which he applies other modifications:

I went much further with my just-restored Ford Galaxie. While it looks completely original, it’s an all-new car underneath. The suspension now moves with improved trailing arms, a Panhard rod to limit rear-axle sway, oversize antiroll bars, beefed-up mounting brackets and stiffer, polyurethane bushings, all from a suspension company called Hotchkis. The sloppy recirculating-ball steering was replaced with a precise rack-and-pinion setup. Wilwood cross-drilled and vented disc brakes grace all four corners. In the engine room, there’s a fuel-injected 511-cubic-inch Jack Roush V8 backed by a Tremec six-speed gearbox. We wrapped the old pieces in paper and put them on a shelf in case we ever want to return the car to its original condition.

As you can read in the last line, Leno keeps the option of reverting the car to its original state if needed. And in fact, his Ford Galaxie doesn't look like a restomod until you lift the hood, as you can see in the video below.

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="https://www.youtube.com/embed/V-BL7G5m98M?feature=oembed" width="640"></iframe>

Restomods with more mod than resto

There are other cases where that's not so, because the modifications made are too extensive. For those kinds of modifications there are other names: “Pro-Touring”, “Pro-Street”, “Rat Rod”, “Hot Rod”…

But I'll save those for another day. For today, just remember: the word is “restomod”.


by admin on May 25, 2015 08:05 PM

May 23, 2015

New album of Diffie-Hellman! xD Includes the hit LogJam!!!



New album of Diffie-Hellman! xD

Includes the hit LogJam !!!

by amhairghin on May 23, 2015 07:27 PM

May 19, 2015

Google I/O

Our colleagues at GDG Coruña are preparing something big for Google's great annual event, Google I/O, and at GPUL we also want to collaborate, together with the Árticos association!

On May 28 and 29 we will stream several talks from the event, plus debates, workshops and working groups, to celebrate Google's developer event of the year.

You can find more information on the GDG page; the event will take place on the 28th from 17:00 to 23:00 and on the 29th in the morning, schedule still to be defined.

On the first day we will watch the keynote live, hold a hackathon to develop mobile applications for the escornabot (www.escornabot.com), and watch other talks about Google TV, Android Wear, etc.

On the second day we will pick some of the talks and comment on them, along with other workshops and events.

If you are interested in attending, it is important (though not mandatory) that you sign up for the event, so we can forecast attendance and plan the event better.


by gpul on May 19, 2015 06:20 PM

May 17, 2015

The Galician's dilemma

Today's the day that commemorates Galician literature, the "Día das Letras Galegas", so it's obviously time to write about more Galician weird stuff. This is something you'll encounter if you share a meal with Galicians.

Let's first set the scene: you are having lunch, or perhaps dinner, in Galicia, with Galicians. As Galicians are wont to do, multiple serving trays are brought to the table, and everybody takes from them whatever they'd like to eat. After a couple of hours, the table is full of serving trays, all of which have one morsel left. Around the table, many Galicians talk and joke, trying to appear nonchalant while they eye the left-over portions of food greedily, obviously wanting to eat them. Yet they never touch them.

This situation is called "a vergoña do galego", which can be translated literally as "the Galician's shame", but I think a better translation would be "the Galician's dilemma". It goes like this:

Initially, the serving trays are full of food, and they circulate around the table so everyone can take a portion commensurate to how hungry they are, how much they like that particular food, and how many other trays full of food they expect to see during the meal. At the end of the first round, anyone who wants seconds can just call for a tray and serve themselves. However, as the amount of food in each tray diminishes, a secondary consideration starts to take hold: "what if someone else wants this food too?" So, when they go for seconds, or thirds, people will usually serve themselves less food than they'd actually like, so that there's still enough for someone else who may want it.

This situation reaches its logical conclusion when there's only one portion left in the serving tray. At this point, the desire to eat the food is less powerful than the dread of depriving someone else of that morsel. As a result, multiple trays will sit on the table, each one displaying a single morsel of food that somebody wants to eat and nobody dares to touch. This situation often reaches ridiculous levels, where you could have trenchers with only one solitary slice of octopus, or dishes displaying one piece of raxo and one potato chip.

Galicians recognize and acknowledge this phenomenon, so they've developed some coping strategies. For example, at a restaurant, when a waiter needs to remove the serving trays, they'll just choose one of the diners and have a conversation like this:

"How did you like the octopus?"
"Ah, it was wonderful."
"So you won't mind finishing it up for me, I need to take the trencher away." (Removes last slice of octopus from trencher, puts it onto the Galician's plate, takes trencher away.)

At this point the dilemma is solved, because it was the waiter, not you, who put the food on your plate. What can you do about it? Nothing, of course. May as well just eat it.

Another solution to the dilemma requires the presence of someone who is not Galician. Non-Galicians are exempt from the dilemma, and not only are they allowed to take the last morsel without fear of repercussion, they will actually be encouraged to.

"Ah, only one portion of empanada left!"
"Yes, this is the Galician's dilemma." (Explanation of the dilemma follows.)
"But you are not Galician, so it doesn't affect you, so just take it!"

Savvy non-Galicians may even just go ahead unprompted and cut the Gordian knot of the Galician dilemma:

"Is this the last prawn?"
"Yes, it is."
"Oh well, I'm not Galician, so..." (Takes it.)

Since Galicians are cognizant of the dilemma, they won't resent the person taking the last portion, and may even thank them for it.

When there are no non-Galicians around, the situation can require a bit of negotiation and diplomacy:

"So, why is there a Padrón pepper left?"
"The Galician's dilemma!"
"I know, but it needs to go."
"You can take it if you want it."
"Don't be absurd! It's clearly saying your name."

Etc., etc.

by jacobo on May 17, 2015 07:35 PM

The return of software patents in Spain

In 2005, the Grupo de Programadores y Usuarios de Linux mobilized against the "EU Directive on the patentability of computer-implemented inventions"[0]. We joined other free software user and developer groups across Europe in this protest, together with professionals, teachers and students from ICT-related fields.

In our case, it took hard work. GPUL being primarily a student association, we had to make the students of the Facultade de Informática of the UDC understand the problems that the approval of a directive of this magnitude would entail, and get them involved. It was also complicated to argue with a significant part of the teaching and research staff. Certain sectors of the Faculty were clearly in favor of the directive, as they were of proprietary software models; but we were also surprised by the more subtle and hypocritical rejection from some people who embraced free software with one hand while, with the other, patenting everything they could, trying to keep for themselves technology paid for with the public funds we all contribute to. It is true that one of the vices of university research in Spain is the "rewards" the state hands out in the form of funding and promotions when a research group registers patents (just as it rewards articles in English in supposedly high-impact journals, something that has created an interesting "market of influence").

In the end, our protest had an echo, even in the local mainstream media. We got the Faculty Board to approve a statement rejecting the European directive. And, despite pressure from the lobbies, the directive did not pass: 648 votes against, 14 in favor and 18 abstentions in the European Parliament.

As was to be expected, the pressure groups behind this initiative did not sit still. Years later, they are attacking again, this time at the national level [1][2]. As the veteran free software activist, Lieutenant Colonel Fernando Acero Martín, warns, the new patent bill in Spain[3], by subtly modifying the current 1985 law, opens a back door to software patentability in our country [4][5].

Ten years have not made us idiots. Those of us who have been in the free software movement for a long time remain aware of how serious this issue is. And, even though the technological ignoramuses we have as legislators and rulers try to tell us otherwise, we know what approving any path to software patentability means. Remember that today most Internet servers, mobile devices and many everyday "smart gadgets" run thanks to the work of thousands of developers who, individually, in small hacker collectives, in SMEs and in the big companies of the sector, have developed the operating systems and applications they run on. This work would be extremely difficult or even impossible if it had to bear the legal obstacles, and assume the legal and administrative costs, of being permanently on the defensive in what would become a never-ending war, a war whose winners are the big companies and corporations that, in the countries that allow it, use software patents not as a basis for technological and business development, but as a weapon of mass destruction fired by their expensive legal departments at their competition, large or small.

In short, this is, once again, a time to fight. A fight not only for the free software movement: a fight to defend the technological future of our country, so that we are not dragged back to the Middle Ages, as some seem to intend.

[0]http://es.wikipedia.org/wiki/Directiva_de_la_Uni%C3%B3n_Europea_sobre_pa...
[1]http://www.rtve.es/noticias/20150309/nueva-ley-patentes-preve-proteger-s...
[2]http://www.eldiario.es/turing/software_libre/patentes-acechan-software-v...
[3]https://intranet.congreso.es/portal/page/portal/Congreso/PopUpCGI?CMD=VE...
[4]http://fernando-acero.livejournal.com/98919.html
[5]http://www.eldiario.es/turing/software_libre/PSOE-UPyD-unicos-patentar-s...

by tsao on May 17, 2015 03:27 PM

May 15, 2015

I got myself an address stamp

I got myself an address stamp.

My address, stamped once on a piece of paper.

It is fun to use.

My address, stamped several times on a piece of paper.

It is indeed quite fun to use.

Several pieces of paper covered in stampings of my address.

I think I may need to buy a new ink pad soon.

by jacobo on May 15, 2015 03:57 PM

April 30, 2015

Some rain-related Galician sayings

Some time ago I wrote a post about some popular sayings in the English language. Today it's time to talk about a few funny sayings in the Galician language.

As you may know, I'm from Spain, but when I tell people I always specify that I'm from the part of Spain where it's rarely sunny and people aren't particularly fond of flamenco. Then people often say "oh, Basque?" and I explain that the Basque Country is in the North, while I'm from Galicia, in the North-West. In Galicia we have our own language, fittingly called "Galician", which is related to Portuguese (they were one and the same language until the 14th century, though there are many people who claim they still are.)

Galicia is notorious in Spain because it's way rainier than the rest of the country. Its capital is Santiago de Compostela, my hometown, which is notorious in Galicia because it's way rainier than the rest of the region. So I assume it wouldn't surprise you if rain featured heavily in our popular sayings. This post, in fact, is about three of those sayings.

The first one is a proverb: "nunca choveu que non escampara", which means "it's never rained for so long that it didn't eventually stop". For my region, that's quite an uncharacteristically optimistic saying that means that bad things don't last forever, so there's no need to despair. Or perhaps it's just that it rains so relentlessly that people need to be reminded that it will stop.

The second one is something you say to someone who's acting foolish or making little sense. "A ti chóveche" literally means "it's raining on/in you". You can also say it of a third person: "a ese home chóvelle" ("it's raining in that man"). I'm guessing it's short for "a ti chóveche na cabeza" ("it's raining inside your head"), which to me is quite evocative. It's basically saying that this person's head is so empty there's enough room for water to evaporate, gather into clouds, condense and precipitate in the form of free-falling drops of water. That's quite a lot of emptiness.

The third and final one for today is "xa choveu", which means "it has rained [quite a bit since then]". You say it to express that quite a long time has elapsed since something. For example, you show someone a photo of your childhood, and this conversation ensues:

"Mira que delgado estaba nesta foto." ("Look how thin I was in this photo.")
"Xa choveu." ("It's been quite a while since.")
"Vai tomar polo cu." ("I resent that remark.")

The last sentence is not translated literally, because I've often observed that English speakers have a lower tolerance for profanity than Galician speakers :-)

For now, that's it for rain-related Galician language sayings. I should probably write a post about Galician language profanity, since we have quite a bit of it, and it's quite creative even by rest-of-Spain standards :-)

(Post your comments in the accompanying Google+ post.)

by jacobo on April 30, 2015 03:49 PM

April 01, 2015

Xoves Libres: Collaborative business models

Collaborative business models: why does Free Software dominate the market?

At GPUL we are always trying to show that Free Software brings loads of value and advantages over proprietary alternatives, but for some time we have wanted to take advantage of the current boom of ICT entrepreneurship in Galicia to show those who are launching a software-related business project that there are alternatives to traditional business models, alternatives that respect the freedoms of the potential customer and can even speed up the development and expansion of a business initiative based on free software.

That is why next Thursday, April 9, at 17:00 in the now legendary laboratory 0.1w of the Facultade de Informática, we will host a true expert in free software business models: Roberto Brenlla.

Roberto studied economics at the Universidade de Santiago de Compostela and was one of the founders of the Galician company Tegnix, devoted to free-software-based consulting and to systems management in business environments, and he has also worked on projects for the public administration, such as the Xunta de Galicia's Abalar project, with more than 75,000 machines managed remotely.

He chaired the Asociación de Empresas Galegas de Software Libre (AGASOL) for several years and has devoted his whole life to free software, becoming an expert on the subject; he currently works as a freelance expert in Free Software and Open Source strategies.


by gpul on April 01, 2015 09:50 PM

March 30, 2015

Xoves Libres: Galipedia wikimarathon at UDC

At GPUL we already have a battery of events lined up to keep all your Thursdays busy next month ;)

For now we can announce that we will collaborate with the Facultade de Informática in organizing a Galipedia wikimarathon, which will last two days and will be part of the Xoves Libres series. You can find more information in the press release below; sign up before the places run out!!

On April 16 and 17, 2015, the Facultade de Informática of the Universidade da Coruña will host the first "Coñece a Galipedia na UDC" sessions, a mixture of an open-day event and a Galipedia wikimarathon. These sessions, promoted by the Facultade de Informática of the Universidade da Coruña, GPUL and Galipedia, with the support of the Servizo de Normalización Lingüística of the Universidade da Coruña, aim to bring the free encyclopedia in Galician closer to everyone interested. The program will include an introductory talk on editing Galipedia and more advanced activities, such as an introduction to operating bots. There will also be a small wikimarathon on articles related to the Universidade da Coruña and other topics, such as computing, as well as the awards ceremony and presentation of the winning entries of the Wikinformática contest, which aims to raise the visibility of the role of women in ICT among secondary school students.

To make the most of this event, attending in person is recommended, since participants will have the chance to meet other wikipedians and give the project visibility. However, it is also possible to participate online. In either case, you need to have created a user account (you can do it here) and sign up on the list of participants.

More information at https://gl.wikipedia.org/wiki/Wikipedia:Primeiras_xornadas_Co%C3%B1ece_a_Galipedia_na_UDC

Programme of activities

Thursday, April 16

  • 17:00-17:30 Introduction to Galipedia and editing with the Visual Editor (Elisardo Juncal, Galipedia)
  • 17:00-17:15 Planning and distribution of the editing work
  • 17:15-19:30 Editing of the proposed articles or others of interest to the attendees
  • 19:30-19:45 Evaluation of the session

Friday, April 17

  • 16:00-17:00 Awards ceremony and presentation of the Wikinformática entries
  • 17:00-17:30 Introduction to using bots on Galipedia (Roi Ardao López "Banjo", Galipedia)
  • 17:00-17:15 Planning and distribution of the editing work
  • 17:15-19:30 Editing of the proposed articles or others of interest to the attendees
  • 19:30-19:45 Evaluation of the session

by gpul on March 30, 2015 03:52 PM

Bringing sanity back to my T440s

As a long-time Thinkpad trackpoint user and the owner of a Lenovo T440s, I always felt quite frustrated with the clickpad featured in this laptop, since it basically ditched all the physical buttons I was so used to and replaced them with a giant, weird and noisy "clickpad".

Fortunately, following Peter Hutterer's post on X.Org Synaptics support for the T440, I managed to get a semi-decent configuration where I basically disabled any movement in the touchpad and used it as three giant soft buttons. It certainly took quite some time to get used to it and avoid making too many mistakes, but it was at least usable thanks to that.

Then, just a few months ago, I learned about the new T450 laptops and how they reintroduced the physical buttons for the trackpoint… and felt happy and upset at the same time: happy to know that Lenovo had finally reconsidered its position and decided to bring back some sanity to the legendary trackpoint, but upset because I realized I had bought the only Thinkpad ever to feature such an insane device.

Luckily enough, I recently found someone selling these new T450 touchpads with the physical buttons on eBay, and people in many places seemed to confirm that they would fit and work in the T440, T440s and T440p (just google for it), so I decided to give it a try.

So, the new touchpad arrived last week and I tried to fit it, although I got a bit scared at some point and decided to step back and leave it for a while. After all, this laptop is 7 months old and I did not want to risk breaking it either :-). But then I kept reading the T440s's Hardware Maintenance Manual in my spare time and learned that I was actually closer than I thought, so I decided to give it another try this weekend… and this is the final result:

T440s with trackpoint buttons!

Initially, I thought of writing a detailed step-by-step guide on how to do the installation, but in the end it all boils down to removing the system board so that you can unscrew the old clickpad and screw in the new one, so just follow the steps in the T440s's Hardware Maintenance Manual for that and you should be fine.

If any, I’d just add that you don’t really need to remove the heatskink from the board, but just unplug the fan’s power cord, and that you can actually do this without removing the board completely, but just lifting it enough to manipulate the 2 hidden screws under it. Also, I do recommend disconnecting all the wires connected to the main board as well as removing the memory module, the Wifi/3G cards and the keyboard. You can probably lift the board without doing that, but I’d rather follow those extra steps to avoid nasty surprises.

Last, please remember that this model has a built-in battery that you need to disable from the BIOS before starting to work with it. This is a new step compared to older models (therefore easy to overlook) and quite an important one, so make sure you don’t forget about it!

Anyway, as you can see the new device fits perfectly fine in the hole of the former clickpad and it even gets recognized as a Synaptics touchpad, which is good. And even better, the touchpad works perfectly fine out of the box, with all the usual features you might expect: soft left and right buttons, 2-finger scrolling, tap to click…

The only problem was that the trackpoint's buttons would not work that well: the left and right buttons translated into "scroll up" and "scroll down", and the middle button simply did not work at all. Fortunately, this is also covered in Peter Hutterer's blog, where he explains that all the problems I was seeing are expected at the moment, since some patches are needed in the kernel for the 3 physical buttons to become visible via the trackpoint again.

But in any case, for those like me who just don’t care about the touchpad at all, this comment in the tracking bug for this issue explains a workaround to get the physical trackpoint buttons working well right now (middle button included), simply by disabling the Synaptics driver and enabling psmouse configured to use the imps protocol.

And because I’m using Fedora 21, I followed the recommendation there and simply added psmouse.proto=imps to the GRUB_CMDLINE_LINUX line in /etc/default/grub, then run grub2-mkconfig -o /boot/grub2/grub.cfg, and that did the trick for me.

Then I went into the BIOS and disabled the "trackpad" option, so as not to get the mouse moving and clicking randomly, and finally enabled scrolling with the middle button by creating a file at /etc/X11/xorg.conf.d/20-trackpoint.conf (based on the one from my old X201), like this:

Section "InputClass"
        Identifier "Trackpoint Wheel Emulation"
        MatchProduct "PS/2 Synaptics TouchPad"
        MatchDriver "evdev"
        Option  "EmulateWheel"  "true"
        Option  "EmulateWheelButton" "2"
        Option  "EmulateWheelInertia" "10"
        Option  "EmulateWheelTimeout" "190"
        Option  "Emulate3Buttons" "false"
        Option  "XAxisMapping"  "6 7"
        Option  "YAxisMapping"  "4 5"
EndSection

So that's it. I suppose I will keep checking the status of the proper fix in the tracking bug and eventually move back to the Synaptics driver once all those issues get fixed, but for now this setup is perfect for me, and definitely way better than what I had before.

I only hope I haven't forgotten to plug in a cable when assembling everything back. At least I can tell I don't have any screws left over and everything I've tested seems to work as expected, so I guess it's probably fine. Fingers crossed!

by mario on March 30, 2015 01:32 AM

March 26, 2015

Building a SNES emulator with a Raspberry Pi and a PS3 gamepad

It’s been a while since I did this, but I got some people asking me lately about how exactly I did it and I thought it could be nice to write a post answering that question. Actually, it would be a nice thing for me to have anyway at least as “documentation”, so here it is.

But first of all, the idea: my personal and very particular goal was to have a proper SNES emulator plugged into my TV, based on the Raspberry Pi (simply because I had a spare one), that I could control entirely with a gamepad (no external keyboards, no ssh connection from a laptop, nothing).

Yes, I know there are other emulators I could aim for, and even Raspberry-specific distros designed for a similar purpose, but honestly I don't really care about MAME, NeoGeo or PSX emulators or the like. I simply wanted a SNES emulator, period. And on top of that I was quite keen on playing a bit with the Raspberry, so I took this route, for good or bad.

Anyway, after doing some investigation I realized all the main pieces were already out there for me to build such a thing, all that was needed was to put them all together, so I went ahead and did it. And these are the HW & SW ingredients involved in this recipe:

Once I got all these things around, this is how I assembled the whole thing:

1. Got the gamepad paired and recognized as a joystick under /dev/input/js0 using the QtSixA project. I followed the instructions here, which explain fairly well how to use sixpair to pair the gamepad and how to get the sixad daemon running at boot time, which was an important requirement for this whole thing to work as I wanted it to.

2. I downloaded the source code of PiSNES, then patched it slightly so that it would recognize the PS3 DualShock gamepad and allow me to define the four directions of the joystick through the configuration file, among other things.

3. I had no idea how to get the PS3 gamepad paired automatically when booting the Raspberry Pi, so I wrote a stupid small script that would basically wait for the gamepad to be detected under /dev/input/js0 and then launch the snes9x.gui GUI to choose a game from the list of ROMs available. I placed it under /usr/local/bin/snes-run-gui, and it looks like this:

#!/bin/bash

BASEDIR=/opt/pisnes

# Wait for the PS3 Game pad to be available
while [ ! -e /dev/input/js0 ]; do sleep 2; done

# The DISPLAY=:0 bit is important for the GUI to work
DISPLAY=:0 $BASEDIR/snes9x.gui

4. Because I wanted that script to be launched on boot, I simply added a line to /etc/xdg/lxsession/LXDE/autostart, so that it looked like this:

@lxpanel --profile LXDE
@pcmanfm --desktop --profile LXDE
@xscreensaver -no-splash
@/etc/sudoers.d/vsrv.sh
@/usr/local/bin/snes-run-gui

By doing the steps mentioned above, I got the following “User Experience”:

  1. Turn on the RPi by simply plugging it in
  2. Wait for Raspbian to boot and for the desktop to be visible
  3. At this point, both the sixad daemon and the snes-run-gui script should be running, so press the PS button in the gamepad to connect the gamepad
  4. After a few seconds, the lights in the gamepad should stop blinking and the /dev/input/js0 device file should be available, so snes9x.gui is launched
  5. Select the game you want to play and press the 'X' button to run it
  6. While in the game, press the PS button to get back to the game selection UI
  7. From the game selection UI, press START+SELECT to shutdown the RPi
  8. Profit!

Unfortunately, while those steps were enough to get the gamepad paired and working with PiSNES, my TV was a bit tricky, and I needed to make a few more adjustments to the Raspberry Pi's boot configuration, which also took me a while to find out.

So, here is the contents of my /boot/config.txt file in case it helps somebody else out there, or simply as reference (more info about the contents of this file in RPiConfig):

# NOOBS Auto-generated Settings:
hdmi_force_hotplug=1
config_hdmi_boost=4
overscan_left=24
overscan_right=24
overscan_top=16
overscan_bottom=16
disable_overscan=0
core_freq=250
sdram_freq=500
over_voltage=2

# Set sdtv mode to PAL (as used in Europe)
sdtv_mode=2

# Force sound to be sent over the HDMI cable
hdmi_drive=2

# Set monitor mode to DMT
hdmi_group=2

# Overclock the CPU a bit (700 MHz is the default)
arm_freq=900

# Set monitor resolution to 1280x720p @ 60Hz XGA
hdmi_mode=85

As you can imagine, some of those configuration options are specific to the TV I have it connected to (e.g. hdmi_mode), so YMMV. In my case I actually had to try different HDMI modes before settling on one that would simply work, so if you are ever in the same situation, you might want to apt-get install libraspberrypi-bin and use the following commands as well:

 $ tvservice -m DMT # List all DMT supported modes
 $ tvservice -d edid.dat # Dump detailed info about your screen
 $ edidparser edid.dat | grep mode # List all possible modes

In my case, I settled on hdmi_mode=85 simply because that's the one that works best for me; it stands for the 1280x720p@60Hz DMT mode, according to edidparser:

HDMI:EDID DMT mode (85) 1280x720p @ 60 Hz with pixel clock 74 MHz has a score of 80296

And that's all, I think. Of course there's a chance I forgot to mention something, because I did this in the random slots of spare time I had back in July, but that should be pretty much it.

Now, simply because this post has been too much text already, here you have a video showing how this actually works (to say nothing of how well or badly I play!):

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="https://www.youtube.com/embed/e30DJ2Ym9Ow" width="560"></iframe>

Video: Raspberry Pi + PS3 Gamepad + PiSNES

I have to say I had great fun doing this and, even if it's quite a hackish solution, I'm pretty happy with it, because it's been so much fun to play those games again, and also because it's been working like a charm ever since I set it up, more than half a year ago.

And even better… turns out I got it working just in time for “Father’s Day”, which made me win the “best dad in the world” award, unanimously granted by my two sons, who also enjoy playing those good old games with me now (and beating me on some of them!).

Actually, that has certainly been the most rewarding thing about all this, no doubt about it.

by mario on March 26, 2015 01:46 AM

March 24, 2015

3D Printing HackLab (new session)

GPUL is pleased to present the continuation of the 3D printing HackLab.
The next session will take place on Thursday, March 26, at 17:00 in laboratory 0.1w of the Facultade de Informática da Coruña.

If you are interested in this HackLab and haven't signed up yet, we at GPUL would appreciate it if you did so as soon as you can using the event form, even if you cannot attend this session.

For quite some time now, the free hardware world has been undergoing a great evolution, with a growing presence in society. 3D printers and the Clone Wars project, which documents how to build your own 3D printer, are fruits of this evolution.

The idea of the HackLab is to create an open, cooperative working group that fosters autonomous and cooperative learning among all its members.

Remember that, even though we are assembling the 3D printer for GPUL, the event is open: if you have a printer to assemble, bring it along so we can all learn together how this world works.

Thank you, and we are counting on you for this journey.


by gpul on March 24, 2015 12:26 PM

March 20, 2015

GStreamer Hackfest 2015

Last weekend I visited my former office in (lovely) Staines-upon-Thames (UK) to attend the GStreamer hackfest 2015, along with ~30 other hackers from all over the world.

This was my very first GStreamer hackfest ever and it was definitely a great experience, although at the beginning I was really not convinced about attending since, after all, why bother attending an event about something I have no clue about?

But the answer turned out to be easy in the end, once I actually thought a bit about it: it would be a good opportunity both to learn more about the project and to meet people in real life (old friends included), making the most of the event happening just 15 minutes away from my house. So, I went there.

And in the end it was a quite productive and useful weekend: I might not be an expert by now, but at least I broke the barrier of getting started with the project, which is already a good thing.

And even better, I managed to move forward a patch to fix a bug in PulseAudio that I had found last December while fixing a downstream issue as part of my job at Endless. Back then I had neither the time nor the knowledge to write a proper patch that could really go upstream, so I focused on fixing the problem at hand in our platform. But I always felt the need to sit down and cook up a proper patch, and this event proved to be the perfect time and place to do that.

Now, thanks to the hackfest (and to Arun Raghavan in particular, thanks!), I’m quite happy to see that the right patch might be on its way to be applied upstream. Could not be happier about it! 🙂

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="315" src="https://www.youtube.com/embed/74vZ3ODUo00" width="560"></iframe>

Last, I'd like to thank Samsung's OSG, and especially Luis, for doing a cracking job making sure that everything ran smoothly from beginning to end. Thanks!

by mario on March 20, 2015 06:22 PM

March 10, 2015

I managed to fix my radio transmitter

(Originally published in English on Google+.)

Today I took a close look at the signal coming out of my antenna and, yes, it turned out that most of the energy was being spent outside the frequency I wanted to transmit on. However, it wasn't because images and spurious emissions were eating most of the energy… it was because I was transmitting on the wrong frequency.

Keep reading, though: this wasn't a simple case of fat fingers turning the frequency dial.

En el mundo de la radio definida por software (SDR, por las siglas en inglés) trabajamos con receptores y transmisores bastante básicos, cuya función es recibir una señal de radio y transformarla en una forma en la que el ordenador pueda procesarla, y viceversa. Para conseguirlo, se utilizan muchas matemáticas en el ordenador, usando algo a lo que llamamos "números complejos". Estos números complejos tienen dos componentes: una parte real y una parte imaginaria (el nombre "imaginaria" lo recibió cuando los matemáticos aún no entendían bien los números complejos y pensaban que estos números eran menos "reales" que los números que habían estado utilizando hasta aquel momento. Pero me estoy yendo por las ramas).

Por lo tanto, un receptor de radio SDR toma una señal de radio, realiza ciertas manipulaciones electrónicas y genera dos señales que el ordenador convierte en números complejos. Una de estas señales, llamada I, se convierte en las partes reales de esos números, y la otra señal, llamada Q, se convierte en las partes imaginarias.

Para transmitir, el ordenador toma las partes reales de los números complejos y genera una señal I, y también genera la correspondiente señal Q a partir de las partes imaginarias de los mismos números. Estas dos señales van al transmisor SDR, sufren ciertas transformaciones electrónicas, y se convierten en una señal de radio.

El problema que tenía yo es que los cables de las señales I y Q estaban cruzados en el transmisor.

El efecto resultante de este cruce de cables es que la señal salía transmitida por una frecuencia incorrecta. Normalmente tenía mi "frecuencia central" fijada en 14,080 MHz y mi frecuencia de transmisión fijada en 14,097 MHz. Sin embargo, como I y Q estaban cruzados, el transmisor tomó la parte real como imaginaria y la parte imaginaria como real, lo que hizo que la señal transmitida se "reflejase" alrededor de la frecuencia central y acabase teniendo una frecuencia de 14,063 MHz. ¡Ups!

Por suerte, estaba utilizando una antena de bucle magnético, que tiene un ancho de banda limitado (esto lo expliqué en la historia de ayer), así que es bastante probable que sólo haya salido transmitida una pequeña fracción de la energía de esta señal, y el resto se haya convertido en calor en la antena.

Ahora bien, no sé si lo he mencionado, pero ya se habían recibido mis señales unas tres o cuatro veces durante las anteriores dos semanas. Además, al utilizar una radio de onda corta para escuchar las transmisiones, podía oír mi señal con toda claridad en la frecuencia correcta. Es natural preguntarse cómo puede ser esto posible, si la señal estaba siendo emitida con la frecuencia equivocada.

La solución es simple: antes de que los números complejos producidos por el ordenador lleguen al transmisor, antes tienen que pasar por una tarjeta de sonido que los convierte en las señales eléctricas I y Q, y luego viajan hasta el transmisor por cables de audio. Cables de audio estéreo, en particular, que tienen tres hilos: un hilo de tierra, un hilo para el altavoz izquierdo, y un hilo para el altavoz derecho. En esta configuración, uno de los hilos de los altavoces porta la señal I y el otro hilo porta la señal Q. Cuando hay dos hilos paralelos por los que circulan corrientes alternas, se inducen pequeñas corrientes mutuamente, lo que causa una pequeña interferencia.

Con esta interferencia, cuando las señales llegan al transmisor de radio, el hilo de la señal I también porta un poquito de señal Q y viceversa, por lo que cuando se produce la transmisión acaba habiendo una pequeña señal en la frecuencia deseada originalmente. Esta señal podía yo oírla en mi radio porque estaba bastante cerca de la antena, y a veces se oía desde más lejos porque estaba utilizando WSPR, un modo digital para señales de baja potencia, así que la gente que recibía esas señales ya estaba esperando recibir transmisiones muy débiles.

Después de arreglar este fallo envié otra transmisión mediante WSPR. En esta ocasión me oyeron cuatro estaciones; la más lejana de ellas estaba en Alaska. Mi segunda señal se recibió en Nueva Zelanda. Ambas transmisiones se realizaron con 1 vatio de potencia. Creo que puedo decir con seguridad que tanto el transmisor como mi antena casera funcionan correctamente :)

Ahora sólo me queda arreglar mi amplificador de 50 vatios.

por jacobo el March 10, 2015 04:38 AM

I fixed my radio transmitter

(This was originally posted on Google+.)

Well, I finally looked carefully at the signal coming out of my antenna and, yes, it turned out that most of the energy was being spent outside of the frequency I wanted to transmit on. However, it wasn't because of images and spurious transmissions that took most of the energy... It was because I was actually transmitting on the wrong frequency!

Keep reading because it wasn't just a case of "herp, derp, turned the knob to the wrong position."

In Software Defined Radio we work with quite basic transceivers whose mission is to receive a radio signal and transform it into a form a computer can process, and vice versa. To do that, a lot of mathematics are used in the computer, using something we call "complex numbers". Those complex numbers have two components: a real part and an imaginary part (the name "imaginary" was given to it when mathematicians didn't understand complex numbers yet, so they thought those numbers were less "real" than the numbers they had been using up to that point, but I digress).

So an SDR radio receiver takes a radio signal, does some electronic manipulations, and generates two signals that are converted into complex numbers by the computer. One of those signals, called I, will become those numbers' real parts, and the other signal, called Q, will become the imaginary parts.

When it's time to transmit, the computer takes the complex numbers' real parts and generates an I signal, and also generates a corresponding Q signal from the same numbers' imaginary parts. Those two signals go to the SDR transmitter, undergo some electronic transformations, and become a radio signal.

The problem with my setup was that the wires for the I and Q signals were crossed in the transmit path.

The effect of this is that the signal went out on the wrong frequency. Generally I had my "center frequency" set to 14.080 MHz and my transmit frequency set to 14.097 MHz. However, as I and Q were crossed, the transmitter took the real part as imaginary and the imaginary part as real, which mirrored the transmitted signal around the center frequency, and my transmission went out on 14.063 MHz. Oops!
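This mirroring is just complex conjugation: swapping I and Q turns the baseband signal z = I + jQ into Q + jI = j·conj(z), which flips the sign of every frequency component. A minimal NumPy sketch (hypothetical values, not the author's actual SDR software), using the +17 kHz baseband offset implied by the post:

```python
import numpy as np

# Hypothetical sketch (not the author's SDR software): swapping the I and
# Q wires mirrors the signal around the center frequency.
fs = 1_000_000          # baseband sample rate, Hz (assumed for the sketch)
f_offset = 17_000       # 14.097 MHz transmit - 14.080 MHz center = +17 kHz
t = np.arange(1000) / fs

z = np.exp(2j * np.pi * f_offset * t)   # intended baseband signal, I + jQ
swapped = z.imag + 1j * z.real          # wires crossed: Q + jI = j * conj(z)

# Locate the strongest frequency bin of each signal.
freqs = np.fft.fftfreq(len(t), 1 / fs)
f_ok = freqs[np.argmax(np.abs(np.fft.fft(z)))]
f_sw = freqs[np.argmax(np.abs(np.fft.fft(swapped)))]
print(f_ok, f_sw)   # +17000.0 and -17000.0: 14.097 MHz comes out at 14.063 MHz
```

The mirrored bin lands exactly as far below the center frequency as the intended signal was above it, matching the 14.097 → 14.063 MHz jump described above.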

Fortunately for me, I was transmitting with a small magnetic loop antenna, which has limited bandwidth (I explained this yesterday), so it is likely that only a very small fraction of this out-of-frequency signal actually went on the air, and the rest was converted to heat in the antenna.

Now, I don't know if I've mentioned this, but my signals had actually been received three or four times in the past two weeks. Also, when I used a shortwave radio to listen to my transmissions, I could hear my signal clearly on the right frequency. It's natural that you'd ask how this would be possible, if the signal was going out on the wrong frequency.

The answer is simple: before the complex numbers coming out of my computer get into the transmitter, they first go through a sound card, where they get turned into the I and Q electrical signals, and then they travel to the transmitter through audio cables. Stereo audio cables, to be precise, that have three wires: a ground wire, a wire for the left speaker, and a wire for the right speaker. In this setup, one of the speaker wires carries the I signal, and the other carries the Q signal. When you have two wires running in parallel carrying alternating currents, they induce small currents on each other, which causes a little bit of crosstalk.

So, with this crosstalk, by the time the signals arrive at the radio transmitter, the wire for the I signal will also carry a little bit of Q signal mixed in and vice versa, and when it's finally transmitted, there will be a tiny signal in the originally intended frequency. I could hear it on my radio because it was very close to the antenna, and it was sometimes heard farther away because I was using WSPR (pronounced "whisper"), a digital mode intended for low-power signals, so the people who received those signals were already listening for very faint transmissions.
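The same model extends to the crosstalk: if each crossed wire also carries a small assumed fraction ε of the other signal, the transmitted baseband becomes j·conj(z) + ε·z, i.e. a strong mirrored signal plus a faint one on the originally intended frequency. A hypothetical sketch (ε is an illustrative coupling value, not a measurement):

```python
import numpy as np

# Hypothetical model of the crosstalk described above: the crossed wires
# carry the swapped signals, plus a small fraction (eps) of each other.
fs = 1_000_000
t = np.arange(1000) / fs
z = np.exp(2j * np.pi * 17_000 * t)   # intended baseband signal, I + jQ
eps = 0.01                            # assumed 1% coupling between the wires

i_wire = z.imag + eps * z.real        # crossed wire: Q plus a bit of I
q_wire = z.real + eps * z.imag        # crossed wire: I plus a bit of Q
tx = i_wire + 1j * q_wire             # what the transmitter actually sees

spectrum = np.abs(np.fft.fft(tx)) / len(t)
freqs = np.fft.fftfreq(len(t), 1 / fs)
mirror = spectrum[np.argmin(np.abs(freqs + 17_000))]   # strong, wrong frequency
wanted = spectrum[np.argmin(np.abs(freqs - 17_000))]   # faint, right frequency
print(mirror, wanted)   # ~1.0 on the mirror, ~eps on the intended frequency
```

The faint ε-sized component on the intended frequency is the signal heard near the antenna, and by distant WSPR stations expecting very weak transmissions.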

After fixing this issue I sent another transmission through WSPR. This time, four stations heard me at once; the farthest of those stations was in Alaska. My second signal was heard in New Zealand. Both were transmitted with 1 watt of power. I think it is safe to say that both my transmitter and my homemade antenna work correctly now :)

And now I have a 50-watt amplifier to fix.

by jacobo on March 10, 2015 04:38 AM

March 09, 2015

Improving my homemade amateur radio antenna

(This was originally posted on Google+.)

Yesterday I worked a bit on my magnetic loop antenna. More properly called a "small magnetic loop antenna" (SMLA for short), it basically consists of a loop of wire connected to a variable capacitor. The loop of wire forms an inductance, which together with the capacitance forms a resonant circuit. By wiring it appropriately, you can use an SMLA to receive and transmit signals at the SMLA's resonant frequency, which you can change by turning a knob to vary the capacitance.

As you might guess, the more resonant the antenna is, the better it works: the signals in that frequency are amplified more. Also, when the antenna is more resonant it has narrower bandwidth: the energy in the antenna is concentrated into a smaller band of frequencies. The amount of resonance is given by a number called "quality factor" (or Q, for short). Q is affected by the values of the inductance, capacitance, and resistance in the SMLA. In particular, the lower the resistance, the higher Q is. So if you make an SMLA you need to reduce the electric resistance as much as you can to get the best value of Q.

There is another reason why it's important to reduce the electric resistance if you want to make a transmitting SMLA: the antenna's radiation resistance is very small, in the order of milliohms, so any additional resistance reduces the antenna's efficiency dramatically.

People like me, who are used to dealing with direct current, would think that it would be enough to use wide-gauge wiring, solder all connections to reduce contact losses, and so on. A couple of weeks ago I measured my SMLA's resistance as 50 milliohms, which doesn't sound so bad; however, my antenna's Q factor seemed quite low and my transmissions were heard by nobody.

What I'd missed is that alternating currents (and radio waves in a cable are alternating currents) don't travel through the full cross-section of the cable the way direct currents do: there's a phenomenon called the "skin effect" by which those currents only travel near the surface of the conductor. The higher the frequency, the shallower the skin: for example, in copper at 14 MHz, most of the current is concentrated at a depth of less than 17 micrometers.
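The 17-micrometer figure follows from the standard skin-depth formula δ = √(ρ/(π·f·μ)); a quick sanity check with textbook constants for copper (these values are reference-book numbers, not measurements from the antenna):

```python
import math

# Sanity check of the ~17 micrometer skin depth, using the standard
# formula delta = sqrt(rho / (pi * f * mu)) with textbook constants.
rho = 1.68e-8             # resistivity of copper, ohm * m
mu = 4 * math.pi * 1e-7   # permeability of copper, approximately mu_0, H/m
f = 14e6                  # frequency, Hz

delta = math.sqrt(rho / (math.pi * f * mu))
print(round(delta * 1e6, 1))   # skin depth in micrometers, ~17.4
```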

The first consequence of this is that the resistance of a wire doesn't go down with the square of the diameter of its cross-section, as it does for direct current, but only linearly with the diameter. So using wide-gauge wire doesn't help much. What you need instead is flat ribbon or, even better, copper braid: braid has a lot of surface area for its volume, so it should present a low resistance to alternating current.

The second consequence is that you should avoid solder joints: since the current travels on the surface, spots where the surface is tinned will have a lower conductivity than the bare copper surface.

So yesterday I remade the connections between the loop and the capacitor in my SMLA, replacing the 10-AWG wires with copper braid ribbons. I fastened them using screws and washers so that they were pressed against the terminals on the ends of the loop and the capacitor, making sure that as much surface area as possible touches.

This change has apparently raised my SMLA's Q factor: I can now work across about 40 kHz before having to retune, where before I could use some 60 kHz. I hoped that transmit performance would also improve, but, alas, nobody heard my transmissions the whole day today. I guess my antenna is not good enough yet.
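Those bandwidth figures map to Q through the usual resonant-circuit relation Q = f / bandwidth. A back-of-envelope sketch (the 14.080 MHz operating frequency is assumed from the earlier posts; the Q values are illustrative, not measurements):

```python
# Back-of-envelope sketch of the Q / bandwidth relation: for a resonant
# circuit, Q = f / bandwidth, so a narrower usable bandwidth at the same
# frequency means a higher quality factor. All values are illustrative.
f_res = 14.080e6    # operating frequency, Hz (assumed from earlier posts)
bw_before = 60e3    # usable bandwidth before the rework, Hz
bw_after = 40e3     # usable bandwidth after the rework, Hz

q_before = f_res / bw_before   # ~235
q_after = f_res / bw_after     # ~352
print(round(q_before), round(q_after))
```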

There may be another explanation for this failure to be heard, though. Using a shortwave receiver I could hear spurious signals around the signal I wanted to transmit. Using an RTLSDR dongle I could see the spectrum around the frequency my transceiver was tuned to, and there were lots of spurs and images on transmit. I don't know if it's a fault in the particular transceiver kit I'm using, or whether it's a drawback of the design itself. In any case, this suggests to me that perhaps too much energy is being wasted on those spurs. That's certainly something I'll need to look at again and more carefully.

by jacobo on March 09, 2015 06:56 AM

March 07, 2015

Thinking of taking part in GSoC or Outreachy?

GPUL's Xoves Libres return for another year to the Faculty of Computer Science at UDC, repeating the motivational talk about Google Summer of Code and similar programs through which you can contribute to free software, build up your CV, and even get paid for it while you are still studying.

Next Thursday, March 12 at 17:00, in Lab 0.1w of the Faculty of Computer Science in A Coruña, we will hold a short talk aimed at encouraging interested people to take part in mentoring programs for free software development, such as Google Summer of Code, Outreachy (formerly the Outreach Program for Women), Google Code-In, etc.

In this activity, people who took part in previous years will talk about their experience and give advice on participating in these programs.

Here are some interesting links:

Attachment: cartel_gsoc.png (380.33 KB)

by gpul on March 07, 2015 01:25 PM

Why a radio signal carrying Morse Code is called Continuous Wave even though it's turned on and off

(Originally published on Google+.)

As you may know, lately I'm into amateur radio. In this world, Morse code is still alive and well, though it is no longer necessary to learn it to get a license. When two operators use Morse code to communicate, quite often they use a mode called "Continuous Wave", or CW for short.

For quite a while I thought that CW was quite an odd name for a way to transmit Morse code. There's certainly a wave: that's the radio wave on which the Morse code is modulated. What I didn't see so clearly was the reason for the "continuous" adjective. After all, the wave is being turned on and off all the time: that's precisely how you can send Morse. If it's being turned on and off, it's not continuous. What's the deal?

Well, the deal is that before we had continuous waves, we already had Morse code on the radio, transmitted with a different kind of radio wave: the Damped Wave.

A Continuous Wave is a sinusoidal wave with a precise frequency. Nowadays it's very easy for us to produce precise and stable sinusoidal waves using pretty cheap electronics. However, in the early days of radio it wasn't so: there weren't electronic oscillator circuits good enough to produce a quality continuous wave. So radio stations used a different mechanism to produce a different kind of radio wave.

This mechanism was the spark-gap transmitter. The general idea is that a high voltage across a gap produces an electric arc (a spark). The transmitter contains a circuit that, when an arc starts, produces a "ringing" oscillation, like the sound of a bell being struck once by a hammer. This oscillation is fed to an antenna to transmit it as a radio wave, which is called a "damped wave" because it loses amplitude with time, just like the sound of a bell stroke.

As the damped wave only lasts for a tiny fraction of a second, the spark gap is set up so that those sparks are extinguished almost as soon as they start, and a new one starts almost immediately, which produces another damped wave. In this way, lots of damped waves are produced and transmitted every second, like a school bell ringing seemingly continuously because its hammer strikes the bell several times per second.
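The contrast can be sketched numerically. This is a toy illustration with made-up, scaled-down frequencies (real spark-gap rigs operated at very different numbers): each spark excites a sine that decays exponentially, re-struck many times per second, whereas a continuous wave keeps a constant amplitude that can simply be keyed on and off.

```python
import numpy as np

# Toy sketch (illustrative values, not period-accurate hardware numbers):
# a spark-gap "damped wave" is a sine whose amplitude decays after each
# spark; a continuous wave keeps a constant amplitude.
fs = 100_000                          # sample rate, Hz
t = np.arange(int(0.01 * fs)) / fs    # 10 ms of signal

f_rf = 5_000        # "carrier" frequency for the sketch, Hz
spark_rate = 500    # sparks per second
tau = 0.0005        # decay time constant of each ring, s

t_since_spark = t % (1.0 / spark_rate)
envelope = np.exp(-t_since_spark / tau)
damped = envelope * np.sin(2 * np.pi * f_rf * t)
continuous = np.sin(2 * np.pi * f_rf * t)

# Each ring decays to under 2% of its peak before the next spark re-strikes.
print(envelope.max(), envelope.min())
```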

The problem with spark-gap transmitters is that they are very inefficient and produce a prodigious amount of interference, so a lot of effort was spent in discovering a good way to generate a "continuous wave" that doesn't lose strength with time so you only need to produce the one wave and turn it on and off as needed.

Eventually, several systems were developed, like high-frequency electric generators, electronic oscillators, etc. As those became commonplace, the old spark-gap transmitters and the damped waves they produced were retired and then banned worldwide (so big was the interference problem).

And that's why a radio signal carrying Morse Code is called Continuous Wave even though it's turned on and off.

by jacobo on March 07, 2015 05:43 AM

January 20, 2015

Akademy 2015 will take place in A Coruña

The GPUL association will collaborate with the KDE Community to hold Akademy 2015, the KDE community's flagship international meeting. The event will take place at the Faculty of Computer Science in A Coruña from July 25 to 31.

Hundreds of people from the KDE community will take part in this event to plan the future of the community and of its technology. There will also be participants from other free software projects (desktop environments, distributions, etc.), as well as from relevant companies in the sector and people interested in free software in general.

More information is available in the announcement published by KDE.

by fid_jose on January 20, 2015 01:10 AM

January 08, 2015

Frogr 0.11 released

Screenshot of Frogr 0.11

So, after neglecting my responsibilities with this project for way too long, I finally released frogr 0.11, making the most of the fact that I'm now enjoying a few days of "parenting vacation".

Still, do not expect this new release to be fully loaded with new features and vast improvements; it's more of another incremental update that adds a couple of nice new things and fixes a bunch of problems I was really unhappy about (e.g. general slowness and crashes).

Wrapping it up, the main changes included with this release are:

As usual, feel free to check the project's website if you want to know more about frogr, how to get it, or how to contribute to it. I'm having a hard time lately finding time to devote to this pet project, so any help anyone can provide will be more than welcome 🙂

By the way, I’m going to FOSDEM this year again, so feel free to say “hi” if you want to chat and/or share a beer (or more!).

by mario on January 08, 2015 02:09 AM

December 08, 2014

"Tecnología Libre de Conflicto" campaign

In June, the development NGO ALBOAN launched a campaign called "Tecnología Libre de Conflicto" (Conflict-Free Technology), which aims to raise public awareness of the conflict that has caused the most deaths since World War II: the war in the Congo. What makes this campaign distinctive is that it points out a connection between that war and the consumption of technology, mobile phones in particular.

The campaign is being very well received, as it proposes several ways to help, among them recycling your mobile phone, which keeps such devices from ending up in technology dumps in countries like the Congo.

They are currently launching a signature drive, to be sent to our representatives in the European Parliament, asking for European regulation that prevents the international trade in minerals from fueling this kind of armed conflict. They therefore need help spreading this campaign and mobilizing citizens to SIGN for conflict-free technology (through Change.org) at www.tecnologialibredeconflicto.org/firma

If you would like more information, we attach a short explanation the NGO sent us, as well as the campaign website, www.tecnologialibredeconflicto.org, where you can find photos, videos and other materials of interest.


by castrinho8 on December 08, 2014 04:20 PM

December 05, 2014

Call to the Ordinary General Assembly, 19/12/2014

An Ordinary General Assembly of GPUL is hereby convened for Friday, December 19, 2014, in room 2.0a of the Faculty of Computer Science.

First call: 19:30
Second call: 20:00

Agenda:

- Reading and approval, if applicable, of the minutes of the previous Assembly.
- Reading of member registrations and cancellations since the last Assembly.
- Reading and approval, if applicable, of the 2013 accounts.
- Status of the 2014 accounts.
- Reading and approval, if applicable, of the 2014 activity report.
- Discussion and approval, if applicable, of a publicity policy for GPUL events.
- Discussion and approval, if applicable, of activities to carry out in 2015, in particular one commemorating GPUL's 16th anniversary.
- Discussion and approval, if applicable, of GPUL becoming a member of Fiare Galicia.
- Questions and comments.

Members may propose new items for the agenda. As specified in Chapter II, Article 19 of the Bylaws, the call (including the final agenda) must be announced at least 15 days in advance, so any modification to the agenda must be proposed early enough.

Should it not be possible to hold the Assembly in the Aula de Graos, an alternative room will be announced with sufficient notice.

A copy of the minutes of the last Assembly is attached for review by members and future attendees.

Signed,

Marcos Chavarría,
Secretary of GPUL.

by gpul on December 05, 2014 05:42 PM

December 03, 2014

Xoves Libres: want to take part?

Xoves Libres are back at GPUL: a recurring series of events about free software, free hardware and free culture in general, held over several weeks during the second university semester at the Faculty of Computer Science of UDC.

All kinds of activities will take place, from talks to hands-on workshops on a range of topics, with the goal of showing the advantages of free software and how strongly present it is in the daily life of users and developers.

If you have a free project you would like to tell people about, know a cool technology and feel like sharing your knowledge, or work at a company that develops or uses free software and want to promote its use, don't hesitate to sign up through our Call for Abstracts.

Schedules and dates are completely flexible: the idea is to start in the first weeks of February 2015 and hold one session per week, on Wednesday or Thursday, but of course we will adapt to whatever dates and times the speakers have available.

You can submit the activity you would like to propose at the following link:

https://www.gpul.org/indico/conferenceDisplay.py?ovw=True&confId=26

For any questions you can write to us at info@gpul.org or on Twitter at @gpul_. We will be enormously grateful for your collaboration, whether you decide to submit an activity or simply help us spread this call for contributions ;)

by gpul on December 03, 2014 10:55 PM

November 27, 2014

ESF organizes a GNU/Linux installation workshop

From GPUL we would like to recommend, to all of you who still don't have a GNU/Linux distribution on your laptop, that you drop by the GNU/Linux installation workshop run by our colleagues at Enxeñería Sen Fronteiras.

The event will take place at the Domus in A Coruña next Tuesday the 2nd, from 16:00 until approximately 19:00, depending on how long the installations take.

The workshop is completely free of charge; the organizers only ask that you register in advance by email: info@galicia.isf.es

For more information see the following link:

Attachment: cartaz_obradoiro.jpg (124.18 KB)

by gpul on November 27, 2014 08:45 PM

November 17, 2014

How magnetic stripe cards work

Over the past week I have been experimenting with a magnetic stripe card reader. In short, I bought the cheapest card reader on Amazon, opened it up, soldered a few wires to it, added a few electronic components, plugged it into my computer's microphone jack, and recorded things like this: the raw signal coming from a magnetic stripe. In this article I am going to explain how magnetic stripe cards work and how to decode them.

Magnetic stripe cards were invented by Forrest Parry in 1969, which was quite a prolific year as far as giant leaps for mankind go. The first company to develop and produce these cards was IBM, which left the basic ideas "open" so that the rest of the industry could develop its own card systems. Before long, the banking and airline industries got together and defined a set of standards, so that all magnetic cards are the same size, have the magnetic stripe in the same position, use the same encodings, and so on.

The magnetic stripe is the strip, usually dark in colour, on the back of the card. The card's data is recorded on the magnetic stripe, but to read about how that data is recorded, you first have to click the "read more" link.
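
The decoding details are behind the "read more" link, but the encoding that standard magnetic stripes use is F2F (Aiken biphase), and a minimal decoder can be sketched from that alone. The `decode_f2f` helper below is illustrative, not the author's code, and it assumes the flux transitions have already been extracted from the raw audio as a list of timestamps:

```python
def decode_f2f(transitions):
    """Decode bits from a list of flux-transition timestamps.

    Magnetic stripes use F2F ("two-frequency", Aiken biphase) coding:
    every bit cell begins with a flux transition, and a '1' bit has an
    extra transition in the middle of the cell.  Comparing each interval
    with the length of the previous bit cell tells the two cases apart,
    which makes the decoder tolerant of changes in swipe speed.
    """
    intervals = [b - a for a, b in zip(transitions, transitions[1:])]
    bits = []
    cell = intervals[0]          # assume the stripe starts with a '0' cell
    i = 0
    while i < len(intervals):
        if intervals[i] < 0.75 * cell and i + 1 < len(intervals):
            # two short (half-cell) intervals -> one '1' bit
            cell = intervals[i] + intervals[i + 1]
            bits.append(1)
            i += 2
        else:
            # one full-cell interval -> one '0' bit
            cell = intervals[i]
            bits.append(0)
            i += 1
    return bits
```

For example, transitions at times 0, 10, 15, 20, 25, 30, 40 decode as the bits 0, 1, 1, 0.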

read more

by jacobo on November 17, 2014 01:54 AM

November 09, 2014

Mystery signal challenge

I'm currently doing some experiments with electronics, and in the process I captured the signal you can find in the attached file. I captured it through my sound card's microphone input, and I've amplified it in software so it's easier to "appreciate". (Update: I've managed to perform a way better capture, so this is as it comes straight from the sound card, with no extra amplification.) Obviously I know what it is, but I'd like to know who among the people who happen to read this will also be able to identify it — and better yet, tell me what's in the signal.

One clue: this signal is produced by something that was invented in 1969.

Post your guesses and comments in the story on Google+.

by jacobo on November 09, 2014 08:09 PM

November 01, 2014

How to demodulate stereo FM radio

In the previous article I talked about how to demodulate an AM or FM radio signal, and in this article I am going to talk about what you find after demodulation (what I called "the programme"). It may surprise you that I am devoting an entire article to the subject but, as you can guess from its length, there is quite a lot to it. At least, to our relief, the matter is very simple for AM and for "mono" FM: after demodulating the radio signal, what we have is a sound wave. Things get complicated, however, when it comes to stereo FM.

In a monophonic sound system there is only one loudspeaker, or there are several but they all reproduce the same audio signal. In a stereophonic sound system, on the other hand, there are two sets of loudspeakers: one reproduces the sounds meant for the right ear and the other reproduces the sounds meant for the left ear. This makes it possible to create effects such as making a sound appear to come from a particular direction (stereophonic sound was invented for the cinema in the 1930s; the word "stereo" comes from the Greek word στερεός, which means "solid").

When the time came to invent a system for broadcasting stereo sound over the radio, it was decided to add the capability to FM radio. The goal was for an FM station to be able to transmit stereo sound over the same channel it had been using for mono sound, so that the mono radios already on the market could receive those stereo broadcasts correctly, even if (of course) they played them in mono. To that end, the audible frequencies of the demodulated programme must contain a monophonic signal, so that a mono FM radio can treat the stereo programme as if it were mono and the music or the news or whatever sounds just the same. However, there was nothing to stop extra information being added at frequencies above these audible frequencies. That is exactly what was done: generate a wave with all the information needed to reconstruct the stereophonic signal, shift it in frequency up to an inaudible frequency, and have the receiver shift it back down to the audible range.
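
The diagrams are in the full article, but the scheme just described can be sketched numerically. This sketch uses the standard FM stereo parameters (a 19 kHz pilot tone, with the L−R difference shifted up to 38 kHz); the function names `stereo_multiplex` and `recover_diff` are my own, for illustration only:

```python
import numpy as np

FS = 192_000       # sampling rate, high enough for the 38 kHz subcarrier
PILOT = 19_000     # pilot tone frequency (Hz)

def stereo_multiplex(left, right):
    """Build the FM stereo baseband ("MPX") signal from L and R audio."""
    t = np.arange(len(left)) / FS
    mono = (left + right) / 2      # L+R: what a mono radio plays
    diff = (left - right) / 2      # L-R: the extra stereo information
    # The pilot tone tells receivers a stereo subcarrier is present
    # and gives them its phase reference.
    pilot = 0.1 * np.sin(2 * np.pi * PILOT * t)
    # The L-R signal is shifted up to 38 kHz (twice the pilot frequency)
    # by multiplying it with a 38 kHz carrier.
    carrier = np.sin(2 * np.pi * 2 * PILOT * t)
    return mono + pilot + diff * carrier

def recover_diff(mpx):
    """Shift the 38 kHz subcarrier back down to recover L-R."""
    t = np.arange(len(mpx)) / FS
    carrier = np.sin(2 * np.pi * 2 * PILOT * t)
    # Multiplying by the carrier again moves L-R back to baseband,
    # plus high-frequency terms a real receiver would filter out.
    return 2 * mpx * carrier
```

A receiver then computes L = mono + diff and R = mono − diff; a mono radio simply low-pass filters the MPX signal and plays the L+R part unchanged.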

There are drawings and diagrams in the full article, which you can read by clicking on "read more". Or you can not click it and wonder forever. Up to you.

read more

by jacobo on November 01, 2014 11:12 PM

October 26, 2014

Amplitude modulation and frequency modulation

Imagine we have a programme we want to broadcast over the radio. This programme could be a song, or a football match, or a news bulletin, or whatever. The programme may be live, or it may be recorded on a tape or a CD or an MP3. In any case, the programme consists of sound waves that we want to turn into radio waves so that people can receive them and listen to the programme at home.

Obviously, this is not as simple as plugging the microphone or the MP3 player straight into the transmitting antenna. If that could work, there could only be one radio station in each place. Besides, radio waves do not propagate at all well at the frequencies of sound waves; for good transmission, radio waves need frequencies of hundreds of thousands or millions of cycles per second. A process is therefore needed to turn low-frequency sound waves into high-frequency radio waves; this process is called modulation.
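
As a numerical sketch of the two modulations the title refers to (textbook formulas, not taken from the article's full text): in AM the sound wave varies the *amplitude* of the high-frequency carrier, while in FM it varies the carrier's instantaneous *frequency*. The function names and constants below are my own, chosen for illustration:

```python
import numpy as np

FS = 1_000_000     # sampling rate (Hz)
FC = 100_000       # carrier frequency (Hz); real broadcast carriers are higher

def am_modulate(audio, depth=0.5):
    """AM: the audio signal scales the amplitude of the carrier."""
    t = np.arange(len(audio)) / FS
    return (1 + depth * audio) * np.cos(2 * np.pi * FC * t)

def fm_modulate(audio, deviation=5_000):
    """FM: the audio signal shifts the instantaneous frequency of the carrier.

    The instantaneous phase is the integral of the instantaneous
    frequency, so we accumulate the samples with cumsum().
    """
    phase = 2 * np.pi * np.cumsum(FC + deviation * audio) / FS
    return np.cos(phase)
```

Note that the FM wave has constant amplitude: all the information rides in how fast its phase advances, which is why FM is far less sensitive to amplitude noise (static) than AM.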

Click "read more" to read more. Do not click "read less", because there isn't one.

read more

by jacobo on October 26, 2014 06:05 PM

October 22, 2014

How a DTT decoder works

DTT (Digital Terrestrial Television, "TDT" in Spanish) is a system for broadcasting television digitally using radio signals transmitted near the Earth's surface (rather than cable or satellites, which use different systems). In the traditional (analogue) television system the radio signals directly represented the transmitted images and sounds, whereas in the digital system the radio signals represent binary digits that make up a digital video and audio "stream". Don't worry about all these strange words: if you click "read more" I will explain everything.

read more

by jacobo on October 22, 2014 03:39 AM

October 20, 2014

3D Printing HackLab

GPUL is pleased to present the 3D printing HackLab, which will kick off on Thursday October 23 at 16:30 in room 2.0b of the Facultade de Informática da Coruña.

For some time now the free hardware world has been evolving rapidly, with a growing presence in society. 3D printers and the Clone Wars project, which documents how to build your own 3D printer, are fruits of this evolution.

GPUL does not want to be left behind in this world, and is starting out with this HackLab, which will run over multiple sessions with the goal of ending up with an assembled 3D printer for GPUL members to use, although attendees are also welcome to assemble their own printers.

The idea of the HackLab is to create an open, cooperative working group, fostering autonomous and cooperative learning by all of its members.

If you are interested in this HackLab, we would appreciate it if you signed up in the event form as soon as you can, so that we can organize the event better.

HackLab poster

by braisarias on October 20, 2014 03:12 PM