Goiardi

The one-eyed fanged yak is becoming on you.

Goiardi Version 0.6.0 - Order of the Elephant

The Order of the Elephant is a Danish order awarded mostly to foreign heads of state and royals. Its elephant-themed insignia is appropriate for this latest release of goiardi, version 0.6.0.

Along with some bug fixes and smaller improvements, this release introduces Postgres as a supported database backend. The schema is not compatible with erchef’s schema, but is close to the goiardi MySQL schema (with some differences of course). This is the first project I’ve done with Postgres, so comments on how goiardi is using it are welcome.
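
For anyone who wants to try it out, enabling the Postgres backend should look something along these lines in goiardi.conf (see the sample config file shipped with goiardi for the exact option and section names; the ones below are illustrative):

    # goiardi.conf - minimal sketch of a Postgres setup
    # (option and section names illustrative; check goiardi.conf-sample)
    use-postgresql = true

    [postgresql]
        username = "goiardi"
        password = "s3kr1t"
        host = "localhost"
        port = "5432"
        dbname = "goiardi"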

Full CHANGELOG:

  • Postgres support.
  • Fix rebuilding indexes with an SQL backend.
  • Fix a bug where, in MySQL mode, events were being logged twice.
  • Fix an annoying chef-pedant error with data bags.
  • The event logging endpoints now return Method Not Allowed for disallowed HTTP methods, rather than Bad Request.
  • Switch the logger to a fork that can be built and used on Windows; it excludes syslog when building for Windows.
  • Add basic syslog support.
  • Authentication protocol version 1.2 now supported.
  • Add a ‘status’ param to reporting, so the list of reports returned by ‘knife runs’ can be narrowed by the status of the chef run (started, success, or failure). See the example after this list.
  • Fix an action-at-a-distance problem with in-memory mode objects. If the old behavior is still desirable (it seems to be slightly faster than the new way), it can be turned back on with the --use-unsafe-mem-store flag. This change DEFINITELY breaks in-mem data file compatibility. If upgrading, export your data, upgrade goiardi, and reload your data.
  • Add several new searchable parameters for logged events.
  • Add organization_id to all MySQL tables that might need it someday. Orgs are not used at all, so only the default value of 1 currently makes it to the database.
  • Finally ran ‘go fmt’ on goiardi. It didn’t even mess up the long comment blocks, which was what I was afraid it would do. I also ran golint against goiardi and took its recommendations where they made sense, which was most areas except for some involving generated parser code, comments on GobEncode/Decode, commenting a bunch of identical functions on an interface in search, and a couple of cases involving make and slices. All in all, though, the reformatting, linting, and light refactoring has done goiardi some good.
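
As a rough sketch of what the new ‘status’ parameter looks like in practice, queries against the reporting endpoints would resemble the requests below (the exact paths may differ slightly from what's shown here; the knife-goiardi-reporting plugin mentioned further down handles this for you):

    # only successful runs for one node
    GET /reports/nodes/some-node/runs?status=success
    # only failed runs across the whole server
    GET /reports/org/runs?status=failure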

To reiterate, this update breaks saved data file compatibility. You’ll need to export and re-import your data if you’re using the persistent in-memory data store.
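
The export/import cycle uses goiardi’s own flags and looks roughly like this (check goiardi --help for the exact flag names on your installed version):

    # dump all server data from the old goiardi to a file
    goiardi --export=goiardi-dump.json <your usual config options>
    # ...install the new goiardi binary, then load the dump back in...
    goiardi --import=goiardi-dump.json <your usual config options>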

A selection of precompiled binaries is provided on the release page, including Windows and Solaris builds. There are also knife plugins for goiardi’s reporting and event logging capabilities at knife-goiardi-reporting and knife-goiardi-event-log. The reporting plugin was forked from the official Chef knife-reporting plugin to add support for the “status” parameter. Both plugins are also available on rubygems.

Finally, after someone asked how goiardi would handle running ~500 nodes, I decided to take a look and see. I whipped up a little test script to create a thousand nodes and clients to see how goiardi (and more importantly the search) would handle it. I wasn’t particularly worried about the non-search portions of goiardi (1,000 rows in a table isn’t very much at all), but I hadn’t tested search extensively with large amounts of data. After creating those 1,000 nodes and clients, goiardi was using about 1.40 GB of RAM. An earlier test with 5,500 nodes (but no clients) used 6.45 GB of RAM. The numbers show that there’s definitely room for improvement with search, in that the capacity for indexing nodes and data bags is limited by RAM, but performance was good even at 5,500 nodes. More rigorous testing is needed, however. Search is an area I intend to revisit shortly to address these problems.

The next items to work on with goiardi are: improving search so it works even better with large amounts of data, and using raft (or something similar) to let multiple goiardi instances keep their search indexes consistent; reworking the permissions system; being able to upload files to S3 or similar storage providers; and the serf-based pushy type thing.