Friday, December 30, 2016

The Trump Tweet Twist

Click to enlarge
Lots of fun guaranteed for 2017!

Thursday, December 29, 2016

Retrospective 2016

  • Ada PDF Writer. First release in 2016. Link here.
  • AZip 2.0. First release with the standard Deflate compression format and a recompression tool for squeezing Zip archives smaller, with automatic algorithm selection depending on the data type. Link to AZip here.
  • Excel Writer. Internationalization and zoom factor added. Link here.
  • GID (Generic Image Decoder). Maintenance release. Added a chart reverse-engineering tool. Link to GID here.
  • GLOBE_3D. Added multi-texturing (specular textures added for shiny effects). Uses GID for decoding textures (broad choice of image formats). Added Wavefront object format importer. Link to GLOBE_3D here.
  • Mathpaqs. Maintenance release. The Copula package can use a vector of pseudo-random generators instead of a single generator. Link to Mathpaqs here.
  • Zip-Ada (#1). Added a "true" - and original - Deflate algorithm which scans the LZ-compressed data stream to set up compression blocks, without losing too much time doing so (the Taillaule algorithm). Link to Zip-Ada here.
  • Zip-Ada (#2). Added an original LZMA algorithm which uses floating-point calculations for probability estimates of encoding variants. On some data formats (such as raw camera images or mobile device data) this algorithm outperforms all existing LZMA-based compressors on the SqueezeChart benchmark! Link to Zip-Ada here.
Oh, perhaps it's worth repeating: all of this software is written entirely in Ada.
With the exception of AZip, it builds "out of the box" on at least two independent Ada toolsets: AdaCore GNAT and PTC ObjectAda.

Sunday, November 27, 2016

The Trump Effect

With the passage of time and experience, there are necessarily fewer and fewer events in the wonderful world of finance that surprise us. We note in passing that the notion of a "black swan" depends on the subject's level of information and memory, and of course on his reasoning ability (I am thinking of the nice expression "connect the dots").
The surprise here is thus the clear and immediate effect of Donald Trump's election on interest rates around the globe. His promises of additional deficits made US Treasury bond yields jump, across all maturities (from 2 to 30 years). On the short end, markets seem to think that the Fed will now have fewer scruples about raising its rates.
What is surprising (to us) is how quickly this Treasury jump 😏 spread to other markets, in other currencies. For example, here are the mortgage rates of a Swiss regional bank:




Astonishing, isn't it?
It remains to be seen whether the "Trump effect" is the beginning of something durable and marks a real turning point.
We will come back to this in a future post.

Sunday, November 13, 2016

GID release #06 - with Recurve, a chart data recovery tool

GID means Generic Image Decoder, a free, open-source library that can be found here.

The latest release features a couple of new application examples, among them a tool called Recurve for retrieving data from an image with plotted curves. Typically, you come across a chart on a web site and would like to get the corresponding data, to rework it in Excel - perhaps to read off specific values, compare two curves that were not originally on the same chart, or use the data for further calculations. Sometimes the data is not available from the web site - even less so when the chart comes from a PDF or a scanned newspaper page.

Fortunately, Recurve will do the painful job of retrieving the data points for you. It will detect gridlines and filter them out, then track the curves by matching their respective colours.
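The colour-matching idea can be sketched as follows (a language-agnostic toy version; the names and the tolerance are invented for illustration, and gridline filtering is left out):

```python
def close(c1, c2, tol=40):
    # Channel-by-channel colour proximity test
    return all(abs(a - b) <= tol for a, b in zip(c1, c2))

def track_curve(pixels, width, target, tol=40):
    """pixels: dict mapping (x, y) to an (r, g, b) tuple.
    For each column x, average the y positions whose colour
    matches the curve's colour; None where the curve is absent."""
    points = []
    for x in range(width):
        ys = [y for (px, y), col in pixels.items()
              if px == x and close(col, target, tol)]
        points.append(sum(ys) / len(ys) if ys else None)
    return points
```

Averaging over each column turns a curve several pixels thick into a single data point per x value, which is then easy to export to a spreadsheet.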

An example here:
Mortgage rates in Switzerland - 2004 to now. Chart is from the Comparis web site

Tuesday, November 8, 2016

Zip-Ada v.52 - featuring LZMA compression

In case you missed it, there is a new version of Zip-Ada @ http://unzip-ada.sf.net .

Actually, these are two successive versions: v.51 with a basic LZMA encoder, and v.52 with a more advanced one.
The move from v.50 to v.51 meant a gain of 52 places in the Squeeze Chart benchmark, even though the LZ part remained identical and the new "MA" part is a simple, straightforward encoder that replaces our sophisticated Taillaule algorithm for the Deflate format. This shows just how superior the LZMA format is to Deflate.
Then, from v.51 to v.52, 45 more places were gained. This is due to a combination of a better-suited LZ algorithm and a refinement of the "MA" algorithm - details below.

* Changes in '51', 27-Aug-2016:
  - LZMA.Encoding has been added; it is a standalone compressor,
      see lzma_enc.adb for an example of use.
  - Zip.Compress provides now LZMA_1, LZMA_2 methods. In other words, you
      can use the LZMA compression with Zip.Create.
  - Zip.Compress has also a "Preselection" method that selects
      a compression method depending on hints like the uncompressed size.
  - Zip.Compress.Deflate: Deflate_1 .. Deflate_3 compression is
      slightly better.

The LZMA format, new in Zip-Ada on the encoding side, is especially good at compressing database data - be it in binary or text form. Don't be surprised if the resulting archive represents only a few percent of the original data...
The new piece of code, LZMA.Encoding, has been written from scratch. This simple but fully functional version fits in only 399 lines, after going through J-P. Rosen's Normalize tool.
It can be interesting for those who are curious about how the "MA" part of that compression algorithm works.
The code can be browsed here.

* Changes in '52', 08-Oct-2016:
  - UnZip.Streams: all procedures have an additional (optional)
      Ignore_Directory parameter.
  - Zip.Compress has the following new methods with improved compression:
      LZMA_3, Preselection_1 (replaces Preselection), Preselection_2.
      Preselection methods use now entry name extension and size for
      improving compression, while remaining 1-pass methods.

For those interested in what's happening "under the hood": LZMA.Encoding now computes (with floating-point numbers, something unusual in compression code!) an estimate of the predicted probabilities of some alternative encodings, and chooses the most probable one - which gives an immediately better local compression. Sometimes the accumulation of such short-run improvements has a long-run positive effect, but sometimes not - that's where it begins to be fun...
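The idea can be sketched outside of Ada as follows (a toy model, not Zip-Ada's actual code): the predicted cost of an encoding alternative is the sum of -log2(p) over the binary decisions it would emit, and the cheaper alternative wins.

```python
import math

def cost_bits(probabilities):
    """Predicted size in bits of emitting a sequence of binary
    decisions, each taken with the model's probability for the
    bit actually chosen: -log2(p), summed over the sequence."""
    return sum(-math.log2(p) for p in probabilities)

# Toy choice between encoding a literal (3 model decisions)
# and encoding a short match (2 model decisions):
literal_cost = cost_bits([0.9, 0.8, 0.7])
match_cost   = cost_bits([0.4, 0.5])
choice = "literal" if literal_cost < match_cost else "match"
```

A decision that looks locally cheaper can still steer the adaptive model into a worse state later on, which is why the long-run effect mentioned above is not guaranteed.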

Tuesday, November 1, 2016

AZip 2.0

The version 2.0 of AZip is out!

    URL: http://azip.sf.net/

AZip is a Zip archive manager.

The latest addition is an archive recompression tool.

AZip's recompression tool's results - click to enlarge
Some features: 

    - Multi-document (should be familiar to MS Office users)
    - Flat view / Tree view
    - Simple to use (at least I hope so ;-) )
    - Useful tools:
        - Text search function through an archive, without having to extract files
        - Archive updater
        - Integrity check
        - Archive recompression (new), using an algorithm-picking approach to improve a zip archive's compression.
    - Encryption
    - Methods supported: Reduce, Shrink, Implode, Deflate, Deflate64, BZip2, LZMA
    - Free, open-source
    - Portable (no installation needed, no DLL, no configuration file)

"Under the hood" features:

    - AZip is from A to Z in Ada :-)
    - Uses the highly portable Zip-Ada library
    - Portability to various platforms: currently it is fully implemented with GWindows (for Windows), and there is a GtkAda draft; in any case, the key parts of the UI and of user persistence are generic and platform-independent

Enjoy!

Monday, October 24, 2016

The SNB doubles down again!

A bit more fresh data, this time from the Swiss National Bank (SNB).

The number of billions of francs in circulation has just passed a new power of two, as it has done four times in sixteen years in this fine start of the millennium:
  • 2^9 = 512 billion now, in 2016
  • 2^8 = 256 in 2012
  • 2^7 = 128 in 2010
  • 2^6 = 64 in 2009
There were only a little more than 32 billion in 2000.

Readers of this blog know how much we love powers of two (cf. our activities in the field of data compression). This logarithmic milestone therefore had to be duly celebrated with an updated chart, here it is:

Click to enlarge
For those whom this chart might intuitively worry: rest assured!
The growth of the SNB's money creation is much more moderate than, for example, that of its Argentine counterpart:

Click to enlarge

Sleep well, good people!

Tuesday, October 4, 2016

Economy - September 2016

Switzerland...
CHF mortgage rates

CHF minimal-risk rates

...World:
Avindex - financial "risk appetite"

Baltic Dry Index

Tuesday, September 20, 2016

LZMA parametrization

One fascinating property of the LZMA data compression format is that it is actually a family of formats with three numeric parameters that can be set:

  • The “Literal context bits” (lc) parameter sets the number of bits of the previous literal (a byte) that will be used to index the probability model. With 0, the previous literal is ignored; with 8, you have a full 256 x 256 Markov chain matrix, with the probability of getting literal j when the previous one was i.
  • The “Literal position bits” (lp) parameter takes into account the position of each literal in the uncompressed data, modulo 2^lp. For instance, lp=1 is better suited for 16-bit data.
  • The pb parameter plays the same role in the more general context where repetitions occur.
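Here is a sketch, outside of Ada, of how lc and lp select a probability context for the next literal (the index formula follows the LZMA reference implementation; the wrapper around it is illustrative):

```python
def literal_context(pos, prev_byte, lc, lp):
    """Index of the probability sub-model used to encode the next
    literal: the low 'lp' bits of the position, then the top 'lc'
    bits of the previous byte - as in the LZMA reference code."""
    pos_part  = pos & ((1 << lp) - 1)
    prev_part = prev_byte >> (8 - lc) if lc > 0 else 0
    return (pos_part << lc) + prev_part

# (lc, lp) = (3, 0): only the previous byte's top 3 bits matter.
# (lc, lp) = (0, 1): only the parity of the position matters -
# the 16-bit-data case mentioned above.
```

With (lc, lp) = (8, 8) the model would have 65536 literal contexts, which is why memory use grows quickly with these parameters.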

For instance, when (lc, lp, pb) = (8, 0, 0), you have a simple Markov model similar to the one used by the old "Reduce" format for Zip archives. Of course, the encoding of this Markov-compressed data is much smarter with LZMA than with "Reduce".
Additionally, you have a non-numeric parameter: the choice of the LZ77 algorithm – the first stage of LZMA.

The stunning thing is how much changes in these parameters lead to different compression quality. Let’s take a binary format that is difficult to compress losslessly: raw audio files (.wav), 16-bit PCM.
By running Zip-Ada's lzma_enc with the -b (benchmark) parameter, all combinations are tried – in total, 900 different parameter combinations! For many .wav files (but not all), the combination leading to the smallest .lzma archive is (0, 1, 0) – list at bottom [1].
It means that the previous byte is useless for predicting the next one, and that the compression has an affinity with 16-bit alignment, which seems to make sense. The data looks pretty random, but the magic of LZMA manages to squeeze 15% off the raw data, without loss. The fortuitous repetitions are not helpful: the weakest LZ77 implementation gives the best result! Actually, pushing this logic further, I have implemented for this purpose a “0-level” LZ77 [2] that doesn’t do any LZ compression at all. It gives the best output for most raw sound data. Amazing, isn’t it? It seems that repetitions are so rare that they output a very large code through the range encoder, while slightly and temporarily weakening the probability of outputting a literal - see the probability evolution curves in the second article, “LZMA compression - a few charts”.
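The brute-force sweep behind those 900 combinations can be sketched like this (the compress callback is a placeholder, not lzma_enc's actual interface): lc ranges over 0..8, lp and pb over 0..4, with four LZ77 variants, i.e. 9 x 5 x 5 x 4 = 900 runs.

```python
import itertools

def sweep(compress, data):
    """Try every (lc, lp, pb, lz_level) combination and keep the
    parameters giving the smallest compressed output."""
    best = None
    for lc, lp, pb, lz in itertools.product(
            range(9), range(5), range(5), range(4)):
        size = len(compress(data, lc, lp, pb, lz))
        if best is None or size < best[0]:
            best = (size, (lc, lp, pb, lz))
    return best
```

This is pure post-selection: every combination is actually run, which is affordable for a benchmark but far too slow for everyday archiving.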
Graphically, the ordered compressed sizes look like this:



and the various parameters look like this:

The 900 parameter combinations

The best 100 combinations

Many thanks to Stephan Busch, who maintains the only public data compression corpus, to my knowledge, with enough size and variety to be really meaningful for the "real-life" usage of data compression. You can find the benchmark @ http://www.squeezechart.com/ . Stephan is always keen to share his knowledge about compression methods.
____
[1] Here is the directory in descending order (the original file is a2.wav).

37'960 a2.wav
37'739 w_844_l0.lzma
37'715 w_843_l0.lzma
37'702 w_842_l0.lzma
37'696 w_841_l0.lzma
37'693 w_840_l0.lzma
37'547 w_844_l2.lzma
...
32'733 w_020_l0.lzma
32'717 w_010_l1.lzma
32'717 w_010_l2.lzma
32'707 w_011_l1.lzma
32'707 w_011_l2.lzma
32'614 w_014_l0.lzma
32'590 w_013_l0.lzma
32'577 w_012_l0.lzma
32'570 w_011_l0.lzma
32'568 w_010_l0.lzma

[2] In the package LZMA.Encoding you find the very sophisticated "Level 0" algorithm

    if level = Level_0 then
      while More_bytes loop
        LZ77_emits_literal_byte(Read_byte);
      end loop;
    else
      My_LZ77;
    end if;

Hope you appreciate it ;-)

Saturday, September 10, 2016

LZMA compression - a few charts

Here are a few plots that I have set up while exploring the LZMA compression format.

You can pick and choose various LZ77 variants - for LZMA as well as for other LZ77-based formats like Deflate. Of course this choice can be extended to the compression formats themselves. There are two ways of dealing with this choice.
  1. You compress your data with all variants and choose the smallest size - brute force, post-selection; this is what the ReZip recompression tool does
  2. You have a criterion for selecting a variant before the compression, and hope it will be good enough - this is what Zip.Compress, method Preselection does (and the ZipAda tool with -eps)
If the computing resources - time, even energy costs (think of massive backups) - are somewhat limited, you'll be happy with the 2nd way.
A criterion that emerges naturally when playing with recompression is the uncompressed size (one of the things you know before trying to compress).
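In sketch form, such a 1-pass pre-selection could look like this (the threshold and method names are invented for illustration, not Zip.Compress's actual values):

```python
def preselect(name, uncompressed_size, threshold=8192):
    """Pick a compression method before compressing, using only
    cheap hints: the entry's name (extension) and its size."""
    if name.lower().endswith((".jpg", ".png", ".mp3")):
        return "store"        # already compressed: don't waste time
    if uncompressed_size < threshold:
        return "deflate_iz"   # LZ77 tuned for small entries
    return "lzma_bt4"         # BT4 match finder pays off on big data
```

The point of the charts below is precisely to find a defensible value for such a size threshold.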


Obviously the BT4 variant (one of the LZ77 match finders in the LZMA SDK) is better on larger sizes than IZ_10 (Info-Zip's match finder for their Deflate implementation), but is that always the case? Difficult to say from this chart. But if you accumulate the differences, things begin to become interesting.


Funny, isn't it? The criterion would be to choose IZ_10 for sizes smaller than the x-value where the green curve reaches its bottom, and BT4 for sizes larger than that.

Another (hopefully) interesting chart shows how the probability model in LZMA (this time, the "MA" part explained last time) adapts to new data. The increasing curves show the effect of a series of '0's on a probability value used for range encoding; the decreasing curves show the effect of a series of '1's. On the x-axis you have the number of steps.
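The update rule producing such curves is tiny; here it is, transcribed from the LZMA reference constants (11-bit probabilities, adaptation shift of 5), with the plotting left out:

```python
BIT_MODEL_TOTAL = 1 << 11   # probabilities live in 0 .. 2048
MOVE_BITS = 5               # adaptation speed

def update(prob, bit):
    """Nudge the probability of a '0' bit after seeing one bit,
    as in the LZMA reference encoder/decoder."""
    if bit == 0:
        return prob + ((BIT_MODEL_TOTAL - prob) >> MOVE_BITS)
    return prob - (prob >> MOVE_BITS)

# Start at the neutral value and feed ten '0' bits:
p = BIT_MODEL_TOTAL // 2
series = []
for _ in range(10):
    p = update(p, 0)
    series.append(p)
```

Each step moves the probability a fixed fraction (1/32) of the remaining distance toward certainty, which is what gives the curves their exponential shape.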


Thursday, September 1, 2016

Negative rates - ever lower!

CHF interest rates. Source: SNB. Click to enlarge.

Two more scale gradations, in one go...

Thursday, August 18, 2016

LZMA compression explained

This summer vacation's project was completed almost on schedule: write an LZMA encoder whilst enjoying the vacation - that is, work early in the morning and late in the evening when everybody else is sleeping, and have fun (bike, canoe, visiting caves and amazing dinosaur facsimiles, enjoying special beers, ...) the rest of the day.

Well, "schedule" is a bit overstretched, because with a topic as tricky as data compression, it is difficult to tell when and even whether you will succeed...

LZMA is a compression format invented by Igor Pavlov, which combines LZ77 compression with range encoding.

With LZ77, imagine you are copying a text, character by character, but want to take some shortcuts. You send either single characters, or a pair of numbers (distance, length) meaning "please copy 'length' characters, starting 'distance' characters back in the already-copied text, from the point where the cursor is right now". That's it!
LZ77 is a well-covered subject and is the first stage of most compression algorithms. Basically, you can pick and choose an implementation, depending on the final compressed size.
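In sketch form, the decoder's side of LZ77 fits in a few lines (the token format here is invented for illustration):

```python
def lz77_decode(tokens):
    """tokens: a list of single characters or (distance, length)
    pairs. A pair means: copy 'length' characters starting
    'distance' characters back - possibly overlapping the output,
    hence the character-by-character copy."""
    out = []
    for t in tokens:
        if isinstance(t, tuple):
            distance, length = t
            for _ in range(length):
                out.append(out[-distance])
        else:
            out.append(t)
    return "".join(out)

# "ab" followed by (2, 6) expands to "abababab":
# the copy reads what it has just written.
```

The overlapping-copy trick is what lets a 2-character seed expand into an arbitrarily long repetition.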

Range encoding is a fascinating way of compressing a message of any nature. Say you want to send a very large number N, but with fewer digits. It is possible - if some of the digits (0 to 9) appear more frequently, and some less. The method is the following.
You begin with a range, say [0, 999[.
You subdivide it into ten intervals, corresponding to the digits 0 to 9, sized according to their probabilities of occurrence, p0 .. p9. The first digit of N is perhaps 3, and its corresponding interval is, say, [295, 405[.
Then you continue with the second digit by subdividing [295, 405[ into ten intervals. If the second digit is 0, you perhaps now have [295, 306[, representing the partial message "30". You see, of course, that if you want to stick with integers (with computers you don't have infinite precision anyway), you quickly lose precision when you set up the ten intervals with the probabilities p0 .. p9. The solution is to append, from time to time, a 0 to the interval when its width becomes too small. So, if you decide to multiply everything by 10 each time the width is less than 100, the interval for "30" becomes [2950, 3060[.
Some n digits later (after n subdivisions, and a x10 whenever needed), your interval will perhaps look like [298056312, 298056701[. The bounds become larger and larger - a second problem. Solution: you see that the leftmost digits won't change anymore, so you can get rid of them and ship them as a chunk of the compressed message. The compression is better when some symbols are much more frequent than others: the closer a probability is to 1, the better the range width is preserved. If the probability were exactly 1, the width wouldn't change at all, and this trivial message consisting of a single repeated symbol wouldn't take any space at all in its compressed form! It is an extreme case, but it shows why compression methods such as LZMA are extremely good for very redundant data.
That's how basic range encoding works.
Then, a funny thing is that you can encode a mix of different alphabets (say digits '0' to '9' and letters 'A' to 'Z'), or even the same alphabet but with different probabilities depending on the context, provided the decoder knows what to use when. That's all for range encoding (you will find a more detailed description in the original article [1]).
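The subdivision step of this decimal example can be sketched as follows (a toy version for illustration only; real range coders work in binary and handle carry propagation, which this sketch ignores):

```python
def subdivide(low, high, cum, digit):
    """Narrow [low, high[ to the sub-interval of 'digit'.
    cum: cumulative probabilities; cum[d] .. cum[d+1] is the
    share of digit d (cum[0] = 0.0, cum[10] = 1.0)."""
    width = high - low
    new_low  = low + int(width * cum[digit])
    new_high = low + int(width * cum[digit + 1])
    if new_high - new_low < 100:        # too narrow: rescale by 10
        new_low, new_high = new_low * 10, new_high * 10
    return new_low, new_high

# With uniform digit probabilities, starting from [0, 1000[:
# digit 3 narrows to [300, 400[; digit 0 then narrows to
# [300, 310[, which is rescaled to [3000, 3100[.
```

Shipping the now-stable leading digits out of `low`/`high` is the remaining step that keeps the bounds bounded.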

LZMA's range encoder works exclusively on a binary alphabet (0's and 1's), so the range is always divided into two parts. But it works with lots of contextual probabilities: with some parameter settings you can have millions of different probabilities in the model! The probabilities are not known in advance, so in this respect LZMA is a purely adaptive compression method: the encoder and the decoder adapt the probabilities as the symbols are sent and received. After each bit is encoded, sent, received, and decoded, the entire probability set is (and has to be) in exactly the same state on the encoder and the decoder sides.

Developing an encoder from scratch, even if you have open-source code to reproduce, is fun, but debugging it is a pain. A bug feels like something going wrong in a maths PhD thesis: no way to get help from anybody or by browsing the Web. By nature, the compressed data contains no redundancy that would help you fix bugs. The decoder is confused by faulty compressed data and cannot say why. For range encoding it is even worse: as in the example, the digits sent have nothing directly to do with the message being encoded. The interval subdivision, the shipping of the leading interval digits, and the appending of trailing '0's all occur in a completely asynchronous way. So the good tactic is, as elsewhere, to simplify and divide the issues down to the simplest cases.
First, manage to encode an empty message (wow!). It seems trivial, but the range encoder works like a pipeline; you need to initialize it and flush it correctly. Then, an empty message plus the end-of-stream marker. And so on.
Another source of help for LZMA is the probability set: as said before, it needs to be identical on both sides at every point.

The results of this effort in a few numbers:
  • LZMA.Encoding, started July 28th, first working version August 16th (revision 457).
  • Less than 450 lines - including lots of comments and some debugging code to be removed!
  • 5 bugs had to be fixed.

In my (of course biased) opinion, this is the first LZMA encoder that a normal human can understand by reading the source code.

Zip-Ada's Zip.Compress makes use of LZMA encoding since revision 459.

The source code is available here (main SourceForge SVN repository) or here (GitHub mirror).

Back to the vacation topic (which is what you do often when you're back from vacation): a tourist information sign was just perfect as a 32x32 pixel "info" icon for the AZip archive manager.

Click to enlarge
The beautiful sign

By the way, some other things are beautiful in this town (St-Ursanne, on the Doubs river)...



____
[1] G. N. N. Martin, Range encoding: an algorithm for removing redundancy
   from a digitized message, Video & Data Recording Conference,
   Southampton, UK, July 24-27, 1979.

Thursday, July 7, 2016

GLOBE_3D: now, a bit of fog...

Click to enlarge picture

Here is the code activating the fog in the background.

    if foggy then
      Enable (FOG);
      Fog (FOG_MODE, LINEAR);
      Fog (FOG_COLOR, fog_colour(0)'Unchecked_Access);
      Hint (FOG_HINT, FASTEST);
      Fog (FOG_START, 1.0);
      Fog (FOG_END, 0.4 * fairly_far);
    end if;

As usual with GL, it looks very obvious, but (as usual, too) it is one of the few combinations that actually work.

Wednesday, July 6, 2016

GLOBE_3D Release 2016-07-05 - "Blender edition"

GLOBE_3D is a GL Object Based 3D engine realized with the Ada programming language.
URL: http://globe3d.sf.net

Latest additions:
  • Use of Generic Image Decoder (GID) in GL.IO; now most image formats are supported for textures and other bitmaps to be used with GLOBE_3D (or any GL app)
  • New Wavefront format (.obj / .mtl) importer
  • Doom 3 / Quake 4 map importer more complete
  • Unified GNAT project file (.gpr), allowing selection of the target operating system (Windows, Linux, Mac) and compilation mode (fast, debug, small) for demos, tools, etc.
  • Project file for ObjectAda 9.1+ updated
The first two points facilitate the import of 3D models from software such as Blender.
Here is an example:
Click to enlarge
Incidentally, the Wavefront file format is so simple that you can also write 3D models "by hand" in that format. An example made in an Excel sheet is provided along with the importer, in the ./tools/wavefront directory.

Click to enlarge
Enjoy!

Monday, July 4, 2016

Hit and sunk

New for June 2016: all so-called "risk-free" rates up to 30-year maturities are negative.
Currently, the only risk you do not run with these bonds is getting richer...

Click to enlarge

Sunday, July 3, 2016

GLOBE_3D: non-convex objects with transparency

It's stunning how the inventors of GL, from the very beginning in 1991, addressed subtle issues that pop up when you display 3D objects in your own program 25 years later.
For instance, take this model:

No alpha test. Click to enlarge.


It is a cross-shaped object (seen from above); its texture has lots of transparency.
In the red rectangle you see the issue: the face in front was displayed before the face behind it.
There is no bullet-proof rule for sorting faces, and GL has a per-screen-pixel depth buffer that allows faces to be displayed in an arbitrary order. So we don't want to introduce imperfect face sorting just to deal with this kind of object.
Fortunately, the GL geniuses have invented a solution for that issue too:
    Enable    (ALPHA_TEST);
    AlphaFunc (GREATER, 0.05);
Et voilà...
Alpha test. Click to enlarge.
The model, "herbe01.obj" is in the ./tools/wavefront directory in the GLOBE_3D repository.

GLOBE_3D is a GL Object Based 3D engine realized with the Ada programming language.

URL: http://globe3d.sf.net/

Wednesday, June 22, 2016

GLOBE_3D: most image formats now available for textures

The texture loader in GL.IO was around 15 years old and supported only the Targa (.tga) format for textures, plus a few sub-formats of Windows bitmaps (.bmp).
In order to make things easy when dealing with various models, e.g. those imported from Blender, the old image-reading code has been wiped out and the loader now uses GID for the job, supporting JPEG and PNG in addition. For instance, the Blender model below uses the JPEG format for its textures.

Futuristic Combat Jet (hi poly version) by Dennis Haupt (DennisH2010)

The following Blender model has a single PNG texture projected on a complicated surface called a Mandelbulb (never heard of it before!):

Mandelbulb 3D Panorama 3 by DennisH2010


GLOBE_3D is a GL Object Based 3D engine realized with the Ada programming language.

URL: http://globe3d.sf.net/

Tuesday, June 21, 2016

Wavefront importer for GLOBE_3D

Basically, it is now possible to import a model saved in Blender as a Wavefront (.obj) model and turn it into a GLOBE_3D object:

Futuristic Combat Jet by Dennis Haupt (DennisH2010)

GLOBE_3D is a GL Object Based 3D engine realized with the Ada programming language.

URL: http://globe3d.sf.net/

Friday, June 10, 2016

Multitexturing with GLOBE_3D - Doom 3 scene

Here is the effect of adding specular maps to the polygons' textures.

Before


After

The effect is better seen in motion. Click here for a video capture.

GLOBE_3D is a GL Object Based 3D engine realized with the Ada programming language.
URL: http://globe3d.sf.net/

Wednesday, June 8, 2016

Visual debugging with GLOBE_3D

Texture names
Portal labels
More to come soon...

GLOBE_3D is a GL Object Based 3D engine realized with the Ada programming language.
URL: http://globe3d.sf.net/

Tuesday, June 7, 2016

Multitexturing with GLOBE_3D

First success with multitexturing (*) and the GLOBE_3D engine.
Commit on SourceForge: click here.



More to come soon...

GLOBE_3D is a GL Object Based 3D engine realized with the Ada programming language.
URL: http://globe3d.sf.net/

__
(*) several images are painted on the same polygon, but with different light reflection properties.

Monday, May 30, 2016

A HD video capture from GLOBE_3D

No news on this project - except that my hardware for testing it is "new" (2012) and opens possibilities for nicer and smoother video captures.
Enjoy!


On my to-do list (for many years): add multi-texture rendering (specular reflection in addition to diffuse reflection), to make GLOBE_3D look a little bit more like a 21st-century 3D engine...

Tuesday, May 24, 2016

Of Pesos and Francs

The abandonment by the SNB (Swiss National Bank) of the floor rate (1.20 CHF for 1 EUR) in January 2015 keeps making waves. Tens of thousands of jobs have disappeared in Switzerland in the meantime. Surely that proves it is the fault of Mr Jordan (the SNB's chairman), doesn't it? That is what the Swiss media say, and of course there is a good deal of truth in this supposed intra-Swiss causality.
Nevertheless, some nuance is in order:
  • The CHF has, since the shock and near-parity, weakened to 1.11 CHF for 1 EUR.
    Click to enlarge
  • On the international market, the CHF oscillates around parity with the dollar (USD), i.e. the level from before the floor rate was abandoned - and at that time the CHF was already in a downward trend against the USD.
    Click to enlarge
  • In the meantime, a crisis has emerged in China (a big buyer of watches and jewellery) and in international trade (with some indices at 30- or even 40-year lows): that, really, is not Mr Jordan's fault!
  • The SNB continues its interventions to weaken the CHF.

Other voices (a small minority) find that the SNB is doing too much and will end up ruining the Swiss franc.
Indeed, one can sometimes read in the same article that it does too much (interventions) and at the same time not enough (support for the export economy).

These voices are in the minority because immediate phenomena sell better, and because the size of the SNB's balance sheet seems too abstract for the general public. Moreover, it is difficult for anyone (even for people working in the financial sector), and hence also for readers of the Swiss press, to associate the hundreds of billions created ex nihilo by the SNB with the money of everyday life or of one's pension fund. I often hear "yes, but that's not real money", or "I don't care, it's not my money". Yet it is indeed the very same money - taken as a whole - that is at stake...

One regularly sees comparisons between the balance sheets of the central banks of the industrialized countries since the 2008 crisis - which includes the Euro-zone crises that followed. The SNB obviously had to intervene more strongly (relative to GDP) than the others, to compensate for the franc's safe-haven effect. But what can these gigantic figures be compared with, to get a better idea? I thought: let's take a well-known, contemporary case of "vigorous" money printing - Argentina - and compare... Here I had two surprises:

  1. A good surprise: the quantities of Argentine pesos (ARS) and Swiss francs (CHF) can be put on the same scale without applying any factor. This stroke of luck is very good for the chart's readability.
  2. A surprise that is... let's say... very slightly unpleasant: the monetary base of the two currencies follows roughly the same trajectory, with a tenfold increase in ten years.

Click to enlarge

Of course, I will be told that the two cases have nothing to do with each other. Argentina regularly flirts with bankruptcy, inflation there is galloping, and the printed pesos are used to pay the salaries of Argentine civil servants.
By contrast, Switzerland's finances are sound, indeed above all suspicion, and the money creation serves to prevent the Swiss franc from strengthening. In practice, the Swiss central bank pays the salaries of French or German civil servants - on top of financing Apple's shareholders and shale-gas or biotech speculators... Never mind: in principle, the SNB should be able to buy back the 450 billion surplus francs without difficulty and eliminate them.

So why worry?

Sunday, April 3, 2016

Zip-Ada v.50

There is a new version of Zip-Ada @ http://unzip-ada.sf.net .

*** 

In a nutshell, there are now, finally, fast *and* efficient compression methods available.

* Changes in '50', 31-Mar-2016:
  - Zip.Compress.Shrink is slightly faster
  - Zip.Compress.Deflate has new compression features:
     - Deflate_Fixed is much faster, with slightly better compression
     - Deflate_1 was added: strength similar to zlib, level 6
     - Deflate_2 was added: strength similar to zlib, level 9
     - Deflate_3 was added: strength similar to 7-Zip, method=deflate, level 5

I use the term "similar" because compression strength depends on the algorithms used and on the data, so it may differ from case to case. In the following charts, we have a comparison on the two best-known benchmark data sets ("corpora"), where the similarity with zlib (= Info-Zip, prefix iz_ below) holds, but not at all with 7-Zip-with-Deflate.
In blue, you see non-Deflate formats (BZip2 and LZMA), just as a reminder that the world doesn't stop at Deflate, although it is the topic of this article.
In green, you have Zip archives made by Zip-Ada.

Click to enlarge image
Click to enlarge image

Here is the biggest surprise I've had while testing randomly chosen data: a 162 MB sparse integer matrix (among a bunch of results for a Kaggle challenge), which is very redundant data. First, 7-Zip in Deflate mode gives a comparatively poor compression ratio - don't worry for 7-Zip: the LZMA mode, native to 7-Zip, is second best in the list. The most surprising aspect is that the Shrink format (LZW algorithm) produces a compressed size only 5.6% larger than the best Deflate (here, KZip).

Click to enlarge image

Typically, the penalty for LZW (used for GIF images) is from 25% to 100% compared to the best Deflate (used for PNG images). Of course, at the other end of the redundancy spectrum, data closer to random is also more difficult to compress, and the differences between LZW and Deflate necessarily narrow.

About Deflate

As you perhaps know, the Deflate format, invented around 1989 by the late Phil Katz for his PKZip program, performs compression in two steps, combining an LZ77 algorithm with Huffman encoding.
In this edition of Zip-Ada, two known algorithms (one for LZ77, one for finding an appropriate Huffman encoding from an alphabet's statistics) are combined, probably for the first time, within the same software.
Additionally, the determination of compressed blocks' boundaries is done by an original algorithm (the Taillaule algorithm) based on similarities between Huffman code sets.
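As an illustration of the Huffman half of this pair, here is a small Python sketch (not Zip-Ada's code) that computes Huffman code lengths from symbol statistics. Code lengths are the interesting output here, since that is what a Deflate block header transmits to describe its codes.

```python
import heapq
from collections import Counter

def huffman_code_lengths(freqs: dict[str, int]) -> dict[str, int]:
    """Return the Huffman code length for each symbol, given frequencies."""
    # Heap entries: (weight, unique tiebreaker id, symbols in this subtree).
    heap = [(w, i, [s]) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in freqs}
    while len(heap) > 1:
        w1, _, syms1 = heapq.heappop(heap)
        w2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:   # every symbol in the merged subtree
            lengths[s] += 1       # moves one level deeper in the tree
        heapq.heappush(heap, (w1 + w2, len(lengths) + len(heap), syms1 + syms2))
    return lengths

# Frequent symbols get short codes, rare ones long codes:
stats = Counter("abracadabra")   # a:5, b:2, r:2, c:1, d:1
print(huffman_code_lengths(stats))
```

Running this on "abracadabra" gives the most frequent symbol 'a' a 1-bit code and the rarer symbols 3-bit codes, which is the statistical skew Deflate exploits.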

*** 

Zip-Ada is a library for dealing with the Zip compressed archive
file format. It supplies:

 - compression with the following sub-formats ("methods"):
     Store, Reduce, Shrink (LZW) and Deflate
 - decompression for the following sub-formats ("methods"):
     Store, Reduce, Shrink (LZW), Implode, Deflate, BZip2 and LZMA
 - encryption and decryption (portable Zip 2.0 encryption scheme)
 - unconditional portability - within the limits of the compiler's
     provided integer types and the target architecture's capacity
 - input (archive to decompress or data to compress) can be any data stream
 - output (archive to build or data to extract) can be any data stream
 - types Zip_info and Zip_Create_info to handle archives quickly and easily
 - cross-format compatibility with a wide variety of tools and file formats
     based on the Zip format: 7-Zip, Info-Zip's Zip, WinZip, PKZip, Java's
     JARs, OpenDocument files, MS Office 2007+, Nokia themes, and many others
 - task safety: this library can be used ad libitum in parallel processing
 - endian-neutral I/O
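This cross-format compatibility means that an archive produced by one Zip-aware tool can be read by any other. As a quick illustration (using Python's standard zipfile module rather than Zip-Ada itself), the following writes a Deflate-compressed entry and reads it back the way 7-Zip, WinZip or Zip-Ada could:

```python
import io
import zipfile

# Build a Zip archive in memory, with the Deflate compression method...
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("hello.txt", "Hello from a Deflate-compressed entry!\n")

# ...and read it back, as any Zip-aware tool would.
with zipfile.ZipFile(buf) as zf:
    info = zf.getinfo("hello.txt")
    print(info.compress_type == zipfile.ZIP_DEFLATED)  # entry uses Deflate
    print(zf.read("hello.txt").decode(), end="")
```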

Enjoy!

mardi 23 février 2016

Hoarding cash

Just came across this Zerohedge article: "Safes Sell Out In Japan, 1,000 Franc Note Demand Soars As NIRP Triggers Cash Hoarding".

Hoarding cash may sound like a good idea currently, with low price inflation and negative interest rates.
However, it is a big mistake to put Swiss Francs as cash into a safe. You may forget about it and have a bad surprise when you want to use or convert your nice pieces of paper in the future.
Here is the explanation in four charts.

1) Base money and 1,000 CHF banknotes (y-scale limited to 80Bn - click to enlarge chart)

The amount of cash hoarded as 1,000 CHF notes (CHF 71 billion) is now much larger than the entire monetary base as of summer 2008 (CHF 45 billion). First "Uh-oh". In case you are thinking "So what?", ask yourself what might be the value of this additional cash. The answer is: nothing. Again: this cash is backed by nothing. The market, and especially the people hoarding cash in safes, are not (yet) aware of that, fortunately for them (for now).
Now, what does the entire chart of base money look like (in the chart above, the y-scale is limited to 80 billion)?

2) CHF Base money (log scale - click to enlarge chart)
Second "Uh-oh". Again, this new money is backed by nothing. It is sold against foreign currencies by the Swiss National Bank to Mr Market, who is thinking "Wow, the Swiss Franc, it must be rock-solid! The mountains, the gold, the army - what could go wrong?"
With the foreign currencies, the SNB is buying bonds and equities in EUR or USD with more or less success.
Oh, a detail: the y-scale of chart 2 is logarithmic - the value doubles at each crossing of a horizontal line.
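To see what "doubling at each gridline" means, here is a two-line check with invented numbers (illustrative only, not SNB data): on a log2 scale, every doubling is exactly the same vertical step.

```python
import math

# Illustrative figures only (not actual SNB data): a quantity that
# doubles repeatedly. On a log2 scale, each doubling is one unit up.
values = [45, 90, 180, 360, 720]          # each entry is twice the previous
log2_values = [math.log2(v) for v in values]

# Differences between consecutive log2 values are all the same step:
steps = [b - a for a, b in zip(log2_values, log2_values[1:])]
print([round(s, 10) for s in steps])
```

This is why exponential growth looks like a straight line on chart 2 and like a hockey stick on chart 3.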
With a normal scale, it looks like this:
3) CHF Base money (lin scale - click to enlarge chart)
Now a last one, if you did not have enough charts:

4) CHF and ARS Base money (pre mid-2014: one data point per year for ARS - click to enlarge chart)
Happy hoarding!

Bitcoin (and Swiss Franc) manias

Currently there is a crypto-mania (mostly around Bitcoin), no doubt about it. But let's put things into perspective. The followin...