SK1 Print Design is an interesting project. They found the vector graphics program Sketch useful to their business and maintained their own customized version, which eventually became a project all of its own. I'm not involved with SK1 Print Design myself, but I do follow their newsfeed on Facebook, where they regularly post information about their work.
They have added import and export support for a variety of Colour Palettes, including SOC (StarOffice Colours, i.e. the OpenDocument standard used by OpenOffice.org and LibreOffice) and CorelDraw XML Palettes and more. For users who already have CorelDraw this should allow them to reuse their existing Pantone palettes.
They are also continuing their work to merge their SK1 and PrintDesign branches. The next release seems very promising.
I could be programming, but instead today I am playing games and watching television and films. I have always been a fan of Tetris, which is a classic, but I am continuing to play an annoyingly difficult game that, to be honest, I am not sure I even enjoy all that much, yet it is strangely compelling. My interest in usability coincides with my interest in playability. Each area has its own jargon but they are very similar; the biggest difference is that games will intentionally make things difficult. Better games go to great lengths to make the difficulties challenging without being frustrating, gradually increasing the difficulty as the game progresses, and engaging the user without punishing them for mistakes. (Providing save points in a game is similar to providing an undo system in an application: both make the system more forgiving and allow users to recover from mistakes, rather than punishing them and forcing them to do things all over again.)
There is a great presentation about making games more juicy (short article including video) which I think most developers will find interesting. Essentially the presentation explains that a game can be improved significantly without adding any core features. The game functionality remains simple but the usability and playability are improved, providing a fuller, more immersive experience. The animation added to the game is not merely about showing off, but provides a great level of feedback and interactivity. Theme music and sound effects also add to the experience, and again provide greater feedback to the user. The difference between the game at the start and at the end of the presentation is striking, stunning even.
I am not suggesting that flashy animation or theme music is a good idea for every application, but (if the toolkit and infrastructure already provided is good enough) it is worth considering that a small bit of "juice", like animations or sound effects, could be useful, not just in games but in any program. There are annoying bad examples too, but when done correctly it is all about providing more feedback for users, and helping make applications feel more interactive and responsive.
For a very simple example, I have seen many users accidentally switch from Insert to Overwrite mode and not know how to get out of it; unfortunately many things must be learned by trial and error. Abiword changes the shape and colour of the cursor (from a vertical line to a red block), and it could potentially also provide a sound effect when switching modes. Food for thought (alternative video link at Youtube).
OpenRaster is a file format for layered images, essentially each layer is a PNG file, there is some XML glue and it is all contained in a Zip file.
In addition to PNG some programs allow layers in other formats. MyPaint is able to import JPG and SVG layers. Drawpile has also added SVG import.
After a small change to the OpenRaster plugin for The GNU Image Manipulation Program, it will also allow non-PNG layers. The code had to be changed in any case: it needed to at least give a warning that non-PNG layers were not being loaded, instead of quietly dropping them. Allowing other layer types was more useful, and easier too.
(This change only means that other file types will be imported; they will not be passed through and will be stored as PNG when the file is exported.)
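To make that concrete, here is a minimal sketch (not the plugin's own code) that opens an OpenRaster file with the Python standard library and reports any layer whose src attribute in stack.xml does not point at a PNG; the file name example.ora is just a placeholder.

import zipfile
import xml.etree.ElementTree as ET

def list_non_png_layers(ora_path):
    """Report layer sources in an OpenRaster file that are not PNG images."""
    with zipfile.ZipFile(ora_path) as ora:
        stack = ET.fromstring(ora.read("stack.xml"))
        for layer in stack.iter("layer"):
            src = layer.get("src", "")
            if not src.lower().endswith(".png"):
                # Previously such layers were quietly dropped; a plugin can now
                # load them, or at least warn about them, instead of ignoring them.
                print("non-PNG layer:", layer.get("name", "?"), "->", src)

list_non_png_layers("example.ora")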
Summary: plugin updated to allow round-trip of paths.
The MyPaint team are doing great work, making progress towards MyPaint 1.2. I encourage you to give it a try: build it from source or check out the nightly builds. (Recent Windows build. Note: the filename mypaint-1.1.1a.7z may stay the same, but the date of the build does change.)
The Vector Layers feature in MyPaint is particularly interesting. One downside though is that the resulting OpenRaster files with vector layers are incompatible with most existing programs. MyPaint 1.0 was one of the few programs that managed to open the file at all, presenting an error message only for the layer it was not able to import. The other programs I tested failed to import the file at all. It would be great if OpenRaster could be extended to include vector layers and more features, but it will take some careful thought and planning.
It can be challenging enough to create a new and useful feature; planning ahead or trying to keep backwards compatibility makes matters even more complicated. With that in mind I wanted to add some support for vectors to the OpenRaster plugin. Similar to my previous work to round-trip metadata in OpenRaster, I found a way to round-trip Paths/Vectors that is "good enough" and that I hope will benefit users. The GNU Image Manipulation Program already allows paths to be exported in Scalable Vector Graphics (SVG) format. All paths are exported to a single file, paths.svg, and are imported back from that same file. It is not ideal, but it is simple and it works.
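As a rough illustration of that approach (assuming a hypothetical example.ora and leaving the actual SVG export to the image editor), storing and retrieving a single paths.svg inside the zip only needs the standard library:

import zipfile

def save_paths(ora_path, svg_text):
    """Store all exported paths as a single paths.svg inside the .ora zip."""
    # Appending a second time would create a duplicate entry; a real plugin
    # rewrites the archive instead of blindly appending.
    with zipfile.ZipFile(ora_path, "a") as ora:
        ora.writestr("paths.svg", svg_text)

def load_paths(ora_path):
    """Read paths.svg back, returning None if the file has no stored paths."""
    with zipfile.ZipFile(ora_path) as ora:
        if "paths.svg" in ora.namelist():
            return ora.read("paths.svg").decode("utf-8")
    return None

# Hypothetical usage: svg_text would come from the editor's own SVG path export.
save_paths("example.ora", "<svg xmlns='http://www.w3.org/2000/svg'></svg>")
print(load_paths("example.ora"))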
Users can get the updated plugin immediately from the OpenRaster plugin gitorious project page. There is lots more that could be done behind the scenes, but for ordinary users I do not expect any changes as noticeable as these for a while.
Back to the code. I considered (and implemented) a more complicated approach that included changes to stack.xml, where raster layers were stored as one group and paths (vector layers) as another group. This approach was better for exporting information that was compatible with MyPaint but, as previously mentioned, the files were not compatible with any other existing programs.
To ensure OpenRaster files stay backwards compatible, it might be better to always include a PNG file as the source for every layer, and to find another way to link to other types of content, such as text or vectors, or at some distant point in the future even video. A more complicated fallback system might be useful in the long run. For example, the EPUB format reuses the Open Packaging Framework (OPF) standard: pages can be stored in multiple formats, so long as they include a fallback to another format, ending with a fallback to a few standard baseline formats (e.g. XHTML). The OpenRaster standard has an elegant simplicity, but there is so much more it could do.
Summary: plugin updated to allow round-trip of metadata.
OpenRaster does not yet make any suggestions on how to store metadata. My preference is for OpenRaster to continue to borrow from OpenDocument and use the same format of meta.xml file, but that can be complicated. Rather than taking the time to write a whole lot of code and waiting to do metadata the best way, I found another way that is good enough, and expedient. I think ordinary users will find it useful -- which is the most important thing -- to be able to round-trip metadata in the OpenRaster format, so despite my reservations about creating code that might discourage developers (myself included) from doing things a better way in future, I am choosing the easy option. (In my previous post I mentioned my concern about maintainability; this is what I was alluding to.)
A lot of work has been done over the years to make The GNU Image Manipulation Program (GIMP) work with existing standards. One of those standards is XMP, the eXtensible Metadata Platform originally created by Adobe Systems, which used the existing Dublin Core metadata standard to create XML packets that can be inserted inside (or alongside) an image file. The existing code creates an XMP packet, let's call it packet.xmp, and includes it in the OpenRaster file. There's a little more code to load the information back in, and users should be able to go to the menu File, Properties and in the Properties dialog go to the tab labelled Advanced to view (or set) metadata.
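For the curious, here is a hedged sketch of what such a packet can look like, storing two Dublin Core properties as packet.xmp in the zip. The real plugin builds its packet through GIMP's existing XMP code, and strict XMP additionally wraps dc:title and dc:creator in rdf:Alt/rdf:Seq containers, so treat this only as an outline.

import zipfile

def make_xmp_packet(title, creator):
    """Build a minimal XMP packet carrying two Dublin Core properties."""
    # Simplified: strict XMP wraps these values in rdf:Alt/rdf:Seq containers.
    return f"""<?xpacket begin="\ufeff" id="W5M0MpCehiHzreSzNTczkc9d"?>
<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
          xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="">
   <dc:title>{title}</dc:title>
   <dc:creator>{creator}</dc:creator>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>
<?xpacket end="w"?>"""

def add_metadata(ora_path, title, creator):
    """Append the XMP packet to an existing OpenRaster file as packet.xmp."""
    with zipfile.ZipFile(ora_path, "a") as ora:
        ora.writestr("packet.xmp", make_xmp_packet(title, creator))

add_metadata("example.ora", "Fractal study", "A. Painter")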
This approach may not be particularly useful to users who want to get their information out into other applications such as MyPaint or Krita (or Drawpile or Lazpaint) but it at least allows them not to lose metadata information when they use OpenRaster. (In the long run other programs will probably want to implement code to read XMP anyway, so I think this is a reasonable compromise, even though I want OpenRaster to stay close to OpenDocument and benefit from being part of that very large community.)
You can get the updated plugin immediately from the OpenRaster plugin gitorious project page.
If you are a developer and want to modify or reuse the code, it is published under the ISC License.
Thanks to developers Martin Renold and Jon Nordby who generously agreed to relicense the OpenRaster plugin under the Internet Software Consortium (ISC) license (a permissive license, preferred by the OpenBSD project, and also the license used by brushlib from MyPaint). Hopefully other applications will be encouraged to take another look at implementing OpenRaster.
The code has been tidied to conform to the PEP8 style guide, with only 4 warnings remaining, and they are all concerning long lines of more than 80 characters (E501).
The OpenRaster files are also far tidier. For some bizarre reason the Python developers chose to make things ugly by default, and neglected to include any line breaks in the XML. Thanks to Fredrik Lundh and Effbot.org for the very helpful pretty-printing code. The code has also been changed so that many optional tags are included if and only if they are needed, so if you ever do need to read the raw XML it should be a lot easier.
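The pretty-printing helper is along the lines of the Effbot recipe mentioned above: it walks the ElementTree and fills in text and tail whitespace so the serialised XML gains line breaks and indentation.

import xml.etree.ElementTree as ET

def indent(elem, level=0):
    """Add newlines and two-space indentation to an ElementTree element, in place."""
    i = "\n" + level * "  "
    if len(elem):
        if not elem.text or not elem.text.strip():
            elem.text = i + "  "
        for child in elem:
            indent(child, level + 1)
        # The recursion sets each child's tail to the child level; the last
        # child's tail must dedent back to the parent's level.
        if not child.tail or not child.tail.strip():
            child.tail = i
        if not elem.tail or not elem.tail.strip():
            elem.tail = i
    else:
        if level and (not elem.tail or not elem.tail.strip()):
            elem.tail = i

root = ET.fromstring("<image><stack><layer src='data/layer0.png'/></stack></image>")
indent(root)
print(ET.tostring(root).decode())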
There isn't much for normal users unfortunately. The currently selected layer is now marked in the OpenRaster file, as is whether a layer is edit locked. If you are sending files to MyPaint it will correctly select the active layer, and recognize which layers were locked. (No import back yet though.) Unfortunately edit locking (or "Lock pixels") does require version 2.8, so if there is anyone out there stuck on version 2.6 or earlier I'd be interested to learn more, and I will try to adjust the code if I get any feedback.
I've a few other changes that are almost ready but I'm concerned about compatibility and maintainability so I'm going to take a bit more time before releasing those changes.
The latest code is available from the OpenRaster plugin gitorious project page.
Congratulations to Krita on releasing version 2.9 and a very positive write-up for Krita by Bruce Byfield writing for Linux Pro Magazine.
I'm amused by his comment comparing Krita to "the cockpit of a fighter jet" and although there are some things I'd like to see done differently* I think Krita is remarkably clear for a program as complex as it is and does a good job of balancing depth and breadth. (* As just one example: I'm never going to use "File, Mail..." so it's just there waiting for me to hit it accidentally, but as far as I know I cannot disable or hide it.)
Unfortunately Byfield writes about Krita "versus" other software. I do not accept that premise. Different software does different things, users can mix and match (and if they can't that is a different and bigger problem). Krita is another weapon in the arsenal. Enjoy Krita 2.9.
OpenRaster Python Plugin
Early in 2014, version 0.0.2 of the OpenRaster specification added a requirement that each file should include a full size pre-rendered image (mergedimage.png) so that other programs could more easily view OpenRaster files. [Developers: if your program can open a zip file and show a PNG you could add support for viewing OpenRaster files.*]
The GNU Image Manipulation Program includes a Python plugin for OpenRaster support, but it did not yet include mergedimage.png, so I made the changes myself. You do not need to wait for the next release, or for your distribution to eventually package that release: you can benefit from this change immediately. If you are using the GNU Image Manipulation Program version 2.6 you will need to make sure you have support for Python plugins included in your version (if you are using Windows you won't), and if you are using version 2.8 it should already be included. (If the link no longer works, see instead https://gitorious.org/openraster/gimp-plugin-file-ora/ as I hope the change will be merged there soon.)
It was only a small change, but working with Python, and not having to wait for code to compile, makes it so much easier.
* Although it would probably be best if viewer support was added at the toolkit level, so that many applications could benefit.
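To make the developer note above concrete, the whole of a minimal "viewer" is extracting mergedimage.png from the zip and handing it to whatever already displays PNG files; a small sketch (example.ora is a placeholder name):

import zipfile

def extract_merged_image(ora_path, out_path="preview.png"):
    """Extract the pre-rendered mergedimage.png from an OpenRaster file."""
    with zipfile.ZipFile(ora_path) as ora:
        with open(out_path, "wb") as out:
            out.write(ora.read("mergedimage.png"))
    return out_path

# Any existing PNG viewer or toolkit image widget can then show the result.
print(extract_merged_image("example.ora"))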
OpenDocument keeps its metadata in a meta.xml file in the zip container, a good idea worth borrowing. An OpenRaster file with such a meta.xml can be renamed from .ora to .odg and be opened using OpenOffice*, allowing you to view the image and the metadata too. The code, Pinta's OraFormat.cs, is freely available on GitHub under the same license (MIT X11) as Pinta; the relevant sections are "ReadMeta" and "GetMeta". A Properties dialog and other code was also added, and I've edited a screenshot of Pinta to show both the menu and the dialog at the same time.
OpenRaster is a file format for layered images. The OpenRaster specification is small and relatively easy to understand: essentially each layer is represented by a PNG image, other information is written in XML, and it is all contained in a Zip archive. OpenRaster is inspired by OpenDocument.
OpenDocument is a group of different file formats, including word processing, spreadsheets, and vector drawings. The specification is huge and continues to grow. It cleverly reuses many existing standards, avoiding repeating old mistakes, and building on existing knowledge.
OpenRaster can and should reuse more from OpenDocument.
It is easy to say, but putting it into practice is harder. OpenDocument is a huge standard, so where to begin? I am not even talking about OpenDocument Graphics (.odg) specifically, but more generally than that. It is best to show it with an example, so I created an example OpenRaster image with some fractal designs. You can unzip this file and see that, like a standard OpenRaster file, it contains:
fractal.ora
├ mimetype
├ stack.xml
├ data/
│ ├ layer0.png
│ ├ layer1.png
│ ├ layer2.png
│ ├ layer3.png
│ ├ layer4.png
│ └ layer5.png
├ Thumbnails/
│ └ thumbnail.png
└ mergedimage.png
It also, unusually, contains two other files: manifest.xml and content.xml. Despite the fact that OpenDocument is a huge standard, the minimum requirements for a valid OpenDocument file come down to just a few files. The manifest is a list of all the files contained in the archive, and content.xml is the main body of the file, doing some of the things that stack.xml does in OpenRaster (for the purposes of this example; it does many other things too). The result of these two extra files, a few kilobytes of extra XML, is that the image is both OpenRaster AND OpenDocument "compatible" too. Admittedly it is an extremely small subset of OpenDocument, but it allows a small intersection between the two formats. You can test it for yourself: rename the file from .ora to .odg and LibreOffice can open the image.
To better demonstrate the point, I wanted to "show it with code!" I decided to modify Pinta (a paint program written in GTK and C#) and my changes are on GitHub. The relevant file is Pinta/Pinta.Core/ImageFormats/OraFormat.cs, which is the OpenRaster importer and exporter.
This is a proof of concept, it is limited and not useful to ordinary users. The point is only to show that OpenRaster could borrow more from OpenDocument. It is a small bit of compatibility that is not important by itself but being part of the larger group could be useful.
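As a rough sketch of the trick, the manifest half can be generated with a few lines of Python; note that OpenDocument expects the manifest at META-INF/manifest.xml inside the archive, and that a content.xml actually describing the drawing is still needed on top of this, so this is only an outline, not a complete converter.

import zipfile

ODG_MIME = "application/vnd.oasis.opendocument.graphics"

def add_manifest(ora_path):
    """Append an OpenDocument manifest listing the files already present in the
    OpenRaster archive. A content.xml describing the drawing (see above) is
    also needed before LibreOffice will show the image."""
    with zipfile.ZipFile(ora_path, "a") as ora:
        entries = ['<manifest:file-entry manifest:media-type="%s" manifest:full-path="/"/>' % ODG_MIME]
        for name in ora.namelist():
            entries.append('<manifest:file-entry manifest:media-type="" manifest:full-path="%s"/>' % name)
        manifest = (
            '<?xml version="1.0" encoding="UTF-8"?>\n'
            '<manifest:manifest xmlns:manifest="urn:oasis:names:tc:opendocument:xmlns:manifest:1.0">\n'
            + "\n".join(" " + entry for entry in entries)
            + "\n</manifest:manifest>\n"
        )
        ora.writestr("META-INF/manifest.xml", manifest)

add_manifest("fractal.ora")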
It is not without emotion that the Document Liberation Project announces today the first release of the new framework library, librevenge-0.0.0. This release means that the API of librevenge is now set in stone (at least until the 0.1.x series) and thus the library can be used by willing filter-writers.
You might be familiar with some aspects of the librevenge framework from this blog or from this FOSDEM 2014 presentation. David Tardon started a nice series of articles explaining how to use the framework. So, there are no valid excuses remaining not to use it and not to contribute to the world domination that is the ultimate destiny of the Document Liberation Project.
But the first release of a new framework would be empty without mentioning those on whose shoulders we stand. First we would love to thank Will Lachance and Mark Maurer for having started, more than 10 years ago, the development of libwpd. It is this library and its wise interface design that allowed us to move incrementally to the current framework. Thank you guys, you know that without you we would be nowhere!
Besides your servant, David Tardon, and Valek Filipov, we would love to single out a discreet person, who speaks little but codes a lot. It is Laurent Alonso, without whom we would never have been able to recover a huge number of old Macintosh documents. We equally thank all our past and present Google Summer of Code students, without whom the road would be much more thorny.
It would be a very big mistake if we did not thank the project from which we all originate, the LibreOffice project. The community gravitating around LibreOffice is caring and encouraging, and creates the right environment to foster innovation.
Last but not least, our thanks go to The Document Foundation that did not hesitate to take us under its umbrella and provide all the necessary institutional support.
Now a new phase starts and you can be part of it! There are many ways to contribute. You can drop by at the #documentliberation-dev channel at irc.freenode.net. There will always be someone to help you to join this exciting journey.
For more information about our activities, follow @DocLiberation on Twitter, join our Google Plus community or like us on Facebook.
Also, I updated the PHP version on the hosting side (the hosting company did, I just clicked the button in the panel). This caused some glitches with the antispam and the rest when commenting. Sorry about that.
I addressed the known issues related to deprecated PHP functions. This is still easier than upgrading to the newer version of Dotclear that breaks the URLs.
Last week, over a nice dinner in a nice Portuguese restaurant on "the main", I had a discussion with @pphaneuf about decentralised bug tracking. He had the idea first.
Since you have decentralised version control in the form of git (there are others), couldn't we have the same for a bug tracker? Using a sha1 instead of a bug number isn't much different, as you could use the abbreviated form. After all, Mozilla has reached 7-digit bug numbers now.
The idea I proposed was something like carrying that metadata in a secondary, linked repository inside it, or even better, in a different branch. Also there would be an equivalent to cgit to serve this data in a web interface, and probably a few new git commands. The bug repository could be skipped on checkout for those who don't want it.
And here goes bug fixing and triaging in a European airport or luxurious hotel wifi with proper access to the whole history.
The idea sounds crazy, but I think it can work. Let's call it buggit.
And no I'm not coding it. This is just small talk. And I haven't done due diligence in searching if something already existed but I like crazy ideas.
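Still, the content-addressed half of the idea is easy to sketch. Everything below, including the bugs/<id>.json layout, is made up purely for illustration; the hashing just mirrors the way git hashes a blob so that ids can be abbreviated.

import hashlib
import json

def bug_id(record):
    """Hash a bug record the way git hashes a blob, so ids can be abbreviated."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    header = b"blob %d\0" % len(payload)
    return hashlib.sha1(header + payload).hexdigest()

bug = {
    "title": "Crash when opening empty file",
    "status": "open",
    "created": "2013-09-14",
}
full = bug_id(bug)
print(full, "abbreviated as", full[:7])
# A hypothetical "buggit" branch could then store each record as bugs/<id>.json,
# fetched or skipped at clone time just like any other branch.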
We are happy to announce that the LibreOffice project has 10 Google Summer of Code projects for this 10th edition of the program. The selected projects and students are:
Project Title | Selected Student
Connection to SharePoint and Microsoft OneDrive | Mihai Varga
Calc / Impress tiled rendering support | Andrzej Hunt
Improved Color selection | Krisztián Pintér
Enhancing text frames in Draw | Matteo Campanelli
Implement Adobe Pagemaker import filter | Anurag Kanungo
Improvements to the Template manager | Efe Gürkan YALAMAN
Dialog Widget Conversion | freetank
Dialog Widget Conversion | sk94
Improve Usability of Personas | Rachit Gupta
Refactor god objects | Valentin
We wish all of them a lot of success and let the coding start!
Corel released CorelDraw x7 on 27 March 2014. We had some time to look at the changes in the file-format and we adapted libcdr to be able to open it. The changes landed this week in the LibreOffice code, in master and the libreoffice-4-2 branch. That means that support will be available in the next 4.2.x release.
It is good to note that while introspecting the files we discovered a flaw in CorelDraw x7 that makes files using Pantone palette number 30 pretty unusable for CorelDraw users. We worked around it and the files open just fine in LibreOffice. Take this as a first contribution by the new Document Liberation Project.
Hello, dear students!
This little blog post is to remind you that in a bit more than 24 hours, the student applications for the 10th edition of Google Summer of Code will be closed. It is always better to submit an imperfect proposal before the deadline than to miss the deadline by 5 minutes with a perfect one. So, check our Ideas page and hurry up with applying.
Open content is content that is also available openly.
The short version: people claim they don't blog anymore but write at length on Google+, a platform that is closed (it does not allow pulling content via RSS), discriminates on names, and in the end just represents the Google black hole, as it seems only Google fanboys and employees use it.
This also applies to Facebook, Twitter (to a lesser extent, just because of the 140 char limits) and so on.
Sorry this is not the Internet I want. It is 2014, time to take it back.
It all started with an innocent (?) question on the 28th of November 2013. The inimitable Caolán asked whether anybody had considered writing an import filter for the AbiWord document format. And the distinguished readership of this blog knows well what makes your servant tick. So, that very evening, a skeleton was written and libabw, a library to read the AbiWord file-format, was started. It was pretty exciting to write -- after a host of libraries for file-formats that are not documented anywhere -- a filter for a file-format of our cousin. There was a hope that the existence of a reference implementation whose source code is widely accessible would make the endeavour easy. It is undeniable that grepping for values of different enums made the work a bit easier. Nonetheless, a huge part of the work was still figuring out what is permitted in AbiWord and how a change of one parameter affects the rendering of a document. Another thing to find out was how to map the concepts in ABW files onto the libwpd API, which is heavily influenced by ODF concepts.
But the date of the start meant that soon came Christmas, and with it a possibility to spend some free time on the library. Eventually it became very usable, and the import filter made it -- as a late feature -- into the LibreOffice 4.2 line and to users of the upcoming LibreOffice 4.2.0 release.
The library currently supports both the plain XML ABW files as well as the gzipped ZABW files. The converted features include:
And since a picture speaks louder than a hundred words, here are some screenshots:
[Screenshots: A sample ABW file opened in AbiWord | The same ABW file opened in the upcoming LibreOffice 4.2.0]
[Screenshots: A sample (zlib compressed) ZABW file opened in AbiWord | The same ZABW file opened in the upcoming LibreOffice 4.2.0]
As you can see from the screenshots, the world domination that we are actively seeking has several contenders. But if you believe that we are the closest to its realization, please join the filter-writing fun! Show up on the #libreoffice-dev channel at irc.freenode.net. You are also encouraged to follow my Twitter and Google+ accounts. And stay tuned for more exciting news in the near future. We can promise you that you will have a lot of fun in the growing community of LibreOffice filter writers.
Dear friends!
From the bottom of my heart I would like to thank you for your support during the past elections for The Document Foundation Board of Directors. Without you my election would never have been possible, and I never took it for granted. I am thankful for your trust. You cannot even imagine how happy and grateful I am for your support, especially in a moment where my relationship with our project undergoes major changes.
I pray to be always up to the task to co-guide our project with wisdom and integrity.
I love you
Fridrich
The time has come when The Document Foundation will elect a new Board of Directors. As you might already know, there are many good candidates. And since I clearly think I am the best of them, I am writing this to ask you to vote for me. Some of you might know me a bit already, but it is never bad to present myself.
My name is Fridrich Štrba, a national of Switzerland and Slovakia, happily married to Susan for more than 12 years and father of 3 wonderful children: Patrick (9), Miriam (6) and Nathanael (3).
My story with LibreOffice started around 2004, with its predecessor, OpenOffice.org. I was just trying to contribute to libwpd, which is the horse-power of our WordPerfect import, and the OpenOffice.org integration was an interesting thing to contribute to. And since then, my love story with our project has gone through different stages, but we are still together and sometimes even happy.
I have been mentoring Google Summer of Code students since 2006 and recently I was co-responsible for several import filters for reverse-engineered formats (i.e. Visio, CorelDraw, MS Publisher). I can frankly say that my development and marketing work around the filters is a huge part of the reason why LibreOffice is called the "Swiss army knife of file-formats". We managed quite recently to bootstrap a vibrant community of filter-writers, and the number of supported file-formats will only grow.
Between 2007 and 2013, I was highly blessed to be working on LibreOffice as my day job, employed by Novell, then SUSE. Since September 2013, I am again a volunteer like many of you. This newly acquired independence is an advantage. I have no monetary interests of any kind in LibreOffice and, if elected, I will take decisions considering only the good of the project as such.
One advantage of my election would be that I am part of various native-language communities. I speak several languages and can understand the aspirations of the corresponding communities. Besides that, I have been part of the Membership Committee since 2010, and last year I was its Chairman. In that capacity, I was able to push forward my vision of a diverse, open and inclusive community that goes beyond personal sympathies or aversions. And this is the vision I desire to pursue if you give me your trust.
And since it is written "You don't have because you don't ask", with this message I ask you to cast your vote for me.
As many who follow the LibreOffice mailing lists know, soon we will have the elections for the Board of Directors again. Without doubt, there will be a lot of good candidates and the choice will be difficult. Different competencies, personalities, sensibilities. As many parameters as there could ever be. Nonetheless, there is one parameter that was eliminated before the first election: corporate pressure.
From the very beginning of The Document Foundation, the Steering Committee and the initial Membership Committee knew that while corporations can contribute a lot to open source, they can also at some moments try to use the community bodies for their own interest. That is the reason why all elected bodies of The Document Foundation have the 30 per cent rule, where no more than 30 per cent of any body can have the same affiliation. In the same spirit, the election system was designed in such a way that it is technically impossible for anybody to know how a given member voted. From the experience with the "good old times" of OpenOffice.org, it was obvious that corporate influence can do a lot of harm and skew the elections in a considerable way. And even if the rule of 30 per cent is in place, it might be hard for an election officer or for an MC member to stand strong in the face of corporate pressure. And this was the reason why we chose a design that makes it impossible even for the election officer to know whom you voted for. This information is known only to you.
Long time no see, dear friends. But that does not mean that there is nothing to speak about. Hence a new blog post for those who were wondering what has been happening in the reverse-straight engineering partnership.
After the moments in August and September, where I transitioned from working on LibreOffice to working on SUSE Linux Enterprise, and after some breathing pause to give to Caesar (also known as family) what belongs to Caesar, the activity on LibreOffice-related stuff restarted in October. Just this time during nights, weekends and other free time.
Sample Keynote presentation in LibreOffice 4.2
It is with huge pleasure that I realized that we are starting to have a vibrant developer community around the libwpd/libwpg family, as well as around Valek's reverse-engineering framework. SUSE Hackweek 10 helped me to produce an initial importer for the Freehand file-format. Alongside that, David Tardon of RedHat fame added a library to parse Keynote files and a library to convert different e-book file-formats. Laurent Alonso works like a bee on importing Microsoft Works spreadsheets (*.wks). Many exciting things in the pipeline, as you can see.
Wireframe of shapes from a sample Freehand drawing in LibreOffice 4.2
With the extension to presentations and spreadsheets, we decided that the time has come to simply break the super-stable libwpd/libwpg API and take the opportunity to make it even more future-proof, and by the same token solve some of the API issues that were preventing us from importing several features correctly; most notably the Visio connectors.
librevenge
We decided to drastically diminish duplication of code, and we extracted from libwpd, libwpg and libetonyek the API classes along with the types they use. We created a new library, librevenge, where we also added as sub-libraries the (structured) stream implementations that used to be in libwpd-stream, as well as several classes that the libraries used to copy and paste between them. The structured stream implementations now support both OLE2 and Zip containers, and the relevant libraries assume this. That means that we will have to eventually extend the WPXSvStream implementation in LibreOffice's "writerperfect" module to cater for Zip too.
A new sub-library, librevenge-generators, has the simple implementations of the interface classes that we use to convert documents into HTML or text, or that we use to see the raw API calls for the purpose of regression testing. The exception is the RVNGSVGDrawingGenerator class. In the current stable branches, all of the libraries that convert graphics file-formats contain an SVG generator, and they rely on its presence in several cases for things like fills with vector graphics. This class is thus not part of the librevenge-generators library, but of the base librevenge, which is a hard dependency of all of the converter libraries.
RVNGPropertyList
The base type for passing information using the API callbacks is RVNGPropertyList, which was born from libwpd's WPXPropertyList. We modified the design of this class in such a way that each attribute can have as a value either a simple property or an array of RVNGPropertyList elements. This allows us to do more or less all that JSON is able to do. The API classes are thus even more flexible and future-proof, since extending the information passed in the different callbacks will not modify function signatures.
Quality improvement
Although the relevant libraries were quite extensively regression-tested in the past, the new librevenge extends the coverage of unit tests. We hope that this helps us to keep the basic functionality under control without having to run the heavy regression tests on each commit.
Another effort is to avoid copying huge data structures in the API calls. This effort will result in some performance improvements, especially if a document contains a lot of shapes that are filled with different bitmap fills.
When will it be ready?
When it is ready! But seriously, we are trying to take our time and get the APIs right. This way we intend to prevent gratuitous breakages of binary compatibility in the future. So, it will not be in LibreOffice 4.2 for sure.
If this is interesting for you, please drop by at the #libreoffice-dev channel at irc.freenode.net in order to meet us. We cannot promise you that you will become rich, but we can guarantee you fame and eternal gratitude.
Last week-end, Mozilla held its summit in 3 locations: Santa Clara, Toronto and Brussels. The summit is where contributors, paid (employees) or not (volunteers), meet and discuss the future of Mozilla and how we are going to help shape the web. We call them (ourselves) Mozillians.
I attended in Brussels and it was for me the occasion to meet fellow Mozillians face to face for the first time, and to meet others I had never interacted with. I'm reaching my two years as a Mozillian (and paid contributor) and I see a huge value in this. I found that we have a very friendly and vibrant community, spread across the globe, people passionate about the web, passionate about the users and the future of the web, from developers, designers, artists and translators to evangelists, marketing and administrative support. The full spectrum was represented.
I can't wait to attend the next Mozilla summit, in the mean time I'll attend the Gnome Summit that is being held tomorrow in the city I call home: Montréal.
Also I need to go through the 1900 pictures I took during the event. In the meantime you can watch the set on Flickr that contains the stuff I posted on Instagram almost immediately, as well as the Flickr group Mozilla Summit 2013 I created to pool the pictures from other users (feel free to add yours if you haven't already).
This week I am at the Toronto Mozilla office. With Mike and Alan, we were discussing information entropy and backups, and devised the craziness of doing a hard drive backup onto paper, using QR codes.
Alan and Mike did the math.
For one TB, it would take 44 trees, and at 20 pages per minute it would take 123 days to print the 3.6 million letter-size, single-sided pages, at 300 dpi, in large size, using the highest-redundancy QR codes.
Now you know how much information we create and how much it would take to make it last longer than the electronic device it is stored on.
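The printing-time arithmetic is easy to reproduce; the pages-per-terabyte figure depends on QR code density assumptions the post does not spell out, so below it is simply derived from the 3.6 million pages quoted above.

# Reproducing the back-of-the-envelope numbers quoted above.
TERABYTE = 10 ** 12          # bytes
PAGES = 3.6e6                # letter-size, single-sided pages (from the post)
PAGES_PER_MINUTE = 20

minutes = PAGES / PAGES_PER_MINUTE
# ~125 days with these rounded inputs, close to the 123 days quoted.
print("printing time: %.0f days" % (minutes / 60 / 24))
# ~278 KB of payload per page is implied by 1 TB over 3.6 million pages.
print("implied payload per page: %.0f KB" % (TERABYTE / PAGES / 1000))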
Just a service announcement for those that might still have around SXW files generated from WordPerfect documents by the wpd2sxw tool version 0.6.x or earlier (years 2004 and before). Those files used to open fine in early OpenOffice.org versions, but they miss a crucial element. That is the reason why LibreOffice, the modern successor of OpenOffice.org, will refuse to open them. Nevertheless, they are not lost!
The LibreOffice development team, in its constant quest for increased user satisfaction, has a workaround for you!
First grab the zip file with the required manifest. Then get the zipmerge tool that comes with libzip, and merge the manifest into the corresponding SXW file. As an example, this command line could work:
for i in <sxw-file-list>; do zipmerge temporary_sxw.sxw /path/to/sxw_manifest.zip $i && mv temporary_sxw.sxw $i; done
This way you ensure that if the original SXW file already had a manifest, it will not be overwritten by the one from sxw_manifest.zip, which would not be a desirable outcome. Nonetheless, if you only have to repair one SXW file and you have already checked, using tools like zipinfo, that the manifest is missing from it, you can safely use:
zipmerge <original-sxw-file>.sxw /path/to/sxw_manifest.zip
in order to merge the manifest directly into that file. Naturally, you can merge the manifest from sxw_manifest.zip into the SXW file using any other zip-manipulation tool you prefer.
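If you have a whole directory of old SXW files, a quick way to find the ones that actually lack a manifest before running zipmerge could look like this (a sketch assuming the files sit under the current directory):

import glob
import zipfile

# List SXW files that are missing META-INF/manifest.xml and therefore need zipmerge.
for path in glob.glob("**/*.sxw", recursive=True):
    with zipfile.ZipFile(path) as sxw:
        if "META-INF/manifest.xml" not in sxw.namelist():
            print("missing manifest:", path)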
Enjoy and continue using LibreOffice, the free and open source office suite of reference.
LibreOffice is sometimes regarded as the Swiss army knife when it comes to opening office file-formats. Although it might be a slight exaggeration, it is a point of honour of the development team to try to allow users to load into the suite as many of their documents as possible. Every major release from the first LibreOffice 3.3 came with new and improved import filters, often for file-formats that are under-documented, if any documentation can be found at all. In this article, we would like to present the way import filters interface with LibreOffice and give to an interested developer a starting point for adding her favourite file-format among those LibreOffice is able to open.
Filters creating documents directly into LibreOffice internal structures
In general, an import filter's task is to parse the foreign document, extract from it useful information, and feed it to the application in a way it can understand. Many internal filters, like the MS Word filter, use a direct way of communicating with LibreOffice. They import the document directly into the internal structures that represent those documents. The advantage of this approach is the lack of intermediary: the document is immediately understood by the application and no additional processing is needed. The disadvantage is that this approach requires an intimate knowledge of the internal structures used and has thus a steep learning curve. The next two types of filters will correspond better to a developer that does not want to dive too deep into LibreOffice internals, yet wants to have his work done.
OpenDocument format as an interchange format
Who has not heard about OpenDocument? Hardly anybody ignores its existence. But it is also a convenient interchange format for filter writers. No need in this case to understand the LibreOffice internals apart from some hundred lines of boilerplate code that are documented in various places. It suffices to read the source document and generate a "flat" OpenDocument representation of it. LibreOffice is able to load this kind of representation as if it was loading an ODF document.
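As a hedged illustration of how small such a representation can be, the snippet below writes a one-paragraph flat OpenDocument Text file that LibreOffice should open directly; a real filter would of course emit content converted from the foreign format rather than a hard-coded paragraph.

# A minimal "flat" OpenDocument Text document, written out as a .fodt file.
FLAT_ODT = """<?xml version="1.0" encoding="UTF-8"?>
<office:document
    xmlns:office="urn:oasis:names:tc:opendocument:xmlns:office:1.0"
    xmlns:text="urn:oasis:names:tc:opendocument:xmlns:text:1.0"
    office:version="1.2"
    office:mimetype="application/vnd.oasis.opendocument.text">
  <office:body>
    <office:text>
      <text:p>Hello from a flat OpenDocument file.</text:p>
    </office:text>
  </office:body>
</office:document>
"""

with open("hello.fodt", "w", encoding="utf-8") as out:
    out.write(FLAT_ODT)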
XSLT filters
The easiest way to write a filter for an XML-based file-format is using the XSLT filter dialogue. All you need is an XSL transform that converts the foreign XML-based file-format to "flat" ODF, for import filters; and one that converts the corresponding ODF XML to the XML used by the foreign file-format, for export filters. Once those transforms exist, the integration with LibreOffice can be done using the user interface.
Picture 1
In the Tools menu, choose XML Filter Settings; you will see listed all the XSLT filters that are already present in your LibreOffice installation, along with information about the application that is supposed to receive the resulting ODF document. Other information that can be found is the direction of the conversion: is it an import filter, an export filter, or a filter that can both import and export a foreign file-format?
If you click at "New", this dialogue will appear.
Picture 2
In the "General" tab, you will be able to chose the user-visible information about the filter: its name, the application that will receive the converted document (for instance LibreOffice Calc (.ods) for a spreadsheet converted to the OpenDocument Spreadsheet format). This information is also used by LibreOffice to group different types of documents. If you chose presentations in the file-picker and your filter specifies that it is converting into the LibreOffice Impress application, then all files having the file-extension associated with the file-format will be shown in the list.
In the "Name of file type", you will be able to describe the file-format that your filter will handle and in the "File extension" field, you will need to put semicolon-separated list of possible extensions for files in the given file-format. For instance, the extensions for the files in Microsoft Excel 2003 XML file-format will end typically with extensions xml or xls. You can add a comment in the "Comments" field. This last field is optional and you can leave it empty if you desire.
Picture 3
The next tab holds the actual information about the XSL transformations that will do the conversion. The DocType field makes sense principally for import filters. The XSLT filter's type detection will scan for the string you enter there in the first 4000 bytes of the file. Since the type detection searches for this string only in those first 4000 bytes, it is necessary to ensure that the string one specifies can invariably be found in the very beginning of the file. You can leave the field empty if you desire; the type detection will then be done purely on the basis of the extension.
If you are writing an export filter, you will provide in the "XSLT for export" field the transform that will do the conversion from the OpenDocument XML to the file-format for which you write your filter. If this field remains empty, LibreOffice will know that your filter is not an export filter. The same is valid for the "XSLT for import" field: it will contain the path to the XSLT sheet that does the import transformation, and leaving it empty tells LibreOffice that your filter is not an import filter. There are already several filters bundled with LibreOffice that do conversion only in one direction. For instance, the XHTML filters or the MediaWiki filter are used only to export to the corresponding file-formats.
You also have the option to specify the default template for filters that import from file-formats that don't carry style information. For instance, the bundled DocBook filter uses a template to specify styles of different outline levels. If you don't specify the template, there are two possibilities. Either your transform creates a document with full styles, or you rely on the default styles that LibreOffice uses.
The check-box "The filter needs XSLT 2.0 processor" is to be checked only if your transforms use some exclusive 2.0 features. It is nevertheless advisable to write xslt sheets of the 1.0 version. They are much simpler and, because of the performance issues of other xslt processors out there, LibreOffice uses under the hood libxslt. The fact that libxslt, has only limited support of the 2.0 features is widely offset by the performance improvement that its use brought.
Now you are done with the integration of your filter. The dialogue in Picture 1 allows you to test your transforms, and even to export your filter as an extension package and deploy it on different installations of LibreOffice, or to distribute it over our extension web-site http://extensions.libreoffice.org.
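Before wiring a transform into this dialogue, it can be convenient to exercise it outside LibreOffice. A small sketch using the third-party lxml package (which, like LibreOffice, is built on libxslt), with purely hypothetical file names:

from lxml import etree  # third-party package, wraps libxslt

# Apply an import transform to a sample foreign document and inspect the flat ODF.
transform = etree.XSLT(etree.parse("myformat2odf.xsl"))
flat_odf = transform(etree.parse("sample.myformat.xml"))
print(str(flat_odf)[:500])  # first few hundred characters of the generated ODF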
As you can see, the integration of an XSLT-based filter into LibreOffice is rather simple. That is the biggest advantage of this approach. Nevertheless, there are also some disadvantages. Despite the migration of the XSLT engine to the relatively fast libxslt, the use of XSL transforms on large documents can be relatively slow. Another disadvantage is that the transforms are not really good at converting documents where the concepts of the source and target file-formats cannot be easily mapped.
XFilter framework
The XFilter framework is the other way to integrate import filters with LibreOffice. In fact, the previous XSLT-based filters use an intermediary layer that uses this framework too. The advantage of using the XFilter framework directly is the use of higher-level programming languages that allow much easier mapping of incompatible concepts, parsing of documents in several passes, as well as much more complex processing of gathered information. Moreover, this is the way to go if you need to write a filter for a file-format that is not XML-based, since the XSLT-based filters cannot be used to convert binary document file-formats.
The use of the XFilter framework is a bit more complicated than the use of the XSLT-based filter dialogue. Nevertheless, it is far from rocket science. We will examine the steps needed for a typical import filter using the example of the recently added Microsoft Publisher filter in LibreOffice 4. For the sake of simplicity, we first start with the configuration files. You will need to craft two XML fragments, one for the filter description and one for the file-type.
Filter description:
<node oor:name="Publisher Document" oor:op="replace">
<prop oor:name="Flags">
<value>IMPORT ALIEN USESOPTIONS 3RDPARTYFILTER PREFERRED</value>
</prop>
<prop oor:name="FilterService">
<value>com.sun.star.comp.Draw.MSPUBImportFilter</value>
</prop>
<prop oor:name="UIName">
<value xml:lang="x-default">Microsoft Publisher 97-2010</value>
</prop>
<prop oor:name="FileFormatVersion">
<value>0</value>
</prop>
<prop oor:name="Type">
<value>draw_Publisher_Document</value>
</prop>
<prop oor:name="DocumentService">
<value>com.sun.star.drawing.DrawingDocument</value>
</prop>
</node>
The oor:name attribute gives the name of the filter used internally. This name is important because the file-type and the corresponding filter are linked using it. As to the flags, I will mention here only two or three; the others can be used just as they are. The IMPORT flag specifies that we are implementing an import filter. For export filters, the flag is EXPORT, and both flags are present for a bi-directional filter. The ALIEN flag indicates that the filter handles a non-native file-format from the point of view of LibreOffice. When used with the EXPORT flag, on export to the given file-format it will trigger a dialogue warning about a possible data loss.
The FilterService property specifies the service that will be used for converting the document. It is necessary that it corresponds exactly to the implementation name of your import filter. Since the filter is a so-called UNO component, it uses java-like naming. The part com.sun.star.comp.Draw indicates that the filter is a component and converts a drawing, and MSPUBImportFilter is the actual name of the filter. The UIName indicates a name that will appear in the file-selection dialogue for file-formats where none of the type detections is able to detect them. The DocumentService property specifies which service will receive the result of the conversion. Here we are converting the Microsoft Publisher files into LibreOffice Draw as a drawing, which is why the document service will be com.sun.star.drawing.DrawingDocument. If we were converting a text document, the document service would be com.sun.star.text.TextDocument.
The Type property specifies the file type that the filter handles. This value is important because it must correspond to the oor:name attribute of the corresponding file-type description. It is necessary that the name of the file-type starts with the indication of the receiving application. Here we use draw_Publisher_Document, and for instance for the WordPerfect file-format we use in LibreOffice writer_WordPerfect_Document. But let's profit from this and have a look at the second XML fragment, the file-type one. Here is the one that corresponds to our example:
<node oor:name="draw_Publisher_Document" oor:op="replace">
<prop oor:name="DetectService">
<value>com.sun.star.comp.Draw.MSPUBImportFilter</value>
</prop>
<prop oor:name="Extensions">
<value>pub</value>
</prop>
<prop oor:name="MediaType">
<value>application/x-mspublisher</value>
</prop>
<prop oor:name="Preferred">
<value>true</value>
</prop>
<prop oor:name="PreferredFilter">
<value>Publisher Document</value>
</prop>
<prop oor:name="UIName">
<value>Microsoft Publisher</value>
</prop>
</node>
The DetectService property specifies a service that is able to determine whether a document is of the given file-format. In our case, com.sun.star.comp.Draw.MSPUBImportFilter is able to do both the conversion and the type detection. In the Extensions property, semicolon-separated values indicate possible extensions for files of the given file-format. In the case of an export filter, the first extension in the list is used for saving with automatic file-extension enabled. The MediaType property basically specifies the mime-type of the file-format. The other element that links the file-format with the corresponding filter is the PreferredFilter property: LibreOffice will invoke the "Publisher Document" filter to convert the document if the type detection identifies it as "draw_Publisher_Document". As to the UIName, it specifies the way the document format will be referenced in the list of file-formats in the file-picker.
Now we have finished crafting the configuration files. It is time to create the boilerplate C++ code. Our filter not only converts Microsoft Publisher files, but is also able to determine whether a given document is in a file-format it can import. For this purpose, it has to support two services: "com.sun.star.document.ImportFilter" and "com.sun.star.document.ExtendedTypeDetection". If we were implementing an export filter, we would also have to support the service "com.sun.star.document.ExportFilter". Besides the com::sun::star::document::XFilter interface that both are bound to implement, the ExportFilter service must also implement the com::sun::star::document::XExporter interface, and the ImportFilter has to implement com::sun::star::document::XImporter. For initialization, the filter must also implement com::sun::star::lang::XInitialization. And since the filter implements UNO services, it should also implement the com::sun::star::lang::XServiceInfo interface.
But let us concentrate on the interfaces that are specific to the import filter. The XFilter interface has two functions, filter and cancel. In our example we will implement cancel() as a do-nothing function. As for the filter function, it is the one that will do the actual filtering.
sal_Bool SAL_CALL MSPUBImportFilter::filter(const Sequence<PropertyValue> &aDescriptor) {
First, we will have to get the reference to the InputStream that represents the document we want to import. The aDescriptor is a sequence of pairs consisting of the value name and the actual value. The operator >>= will extract the value from the UNO Any (which can contain values of different types) into a variable of the requested type.
    sal_Int32 nLength = aDescriptor.getLength();
    const PropertyValue *pValue = aDescriptor.getConstArray();
    OUString sURL;
    Reference <XInputStream> xInputStream;
    for (sal_Int32 i = 0; i < nLength; i++)
        if (pValue[i].Name == "InputStream")
            pValue[i].Value >>= xInputStream;
Next we will have to specify the import service that will receive the converted document in the form of SAX messages. The com.sun.star.comp.Draw.XMLOasisImporter service is a service that receives the OpenDocument Graphics XML.
    OUString sXMLImportService ("com.sun.star.comp.Draw.XMLOasisImporter");
    Reference <XDocumentHandler> xInternalHandler(
        comphelper::ComponentContext(mxContext).createComponent(sXMLImportService),
        UNO_QUERY);
The XImporter sets up an empty target document for XDocumentHandler to write to.
Reference <XImporter> xImporter(xInternalHandler, UNO_QUERY_THROW);
xImporter->setTargetDocument(mxDoc);
At this point, there is enough to plug in a filter that will read the xInputStream and write the resulting XML into the xInternalHandler. On success of the filtering operation, the filter function should return true, and false on failure. After the implementation of this filter function, we will have to implement XImporter's setTargetDocument function.
void SAL_CALL MSPUBImportFilter::setTargetDocument(const Reference <XComponent> &xDoc)
{
    mxDoc = xDoc;
}
In our case we just keep the Reference to XComponent in a member variable, which we used in the previous snippet to set up an empty target that receives our imported document. And that would be all for the integration of an import filter. For an export filter we would also have to implement XExporter's setSourceDocument, which is basically symmetrical to XImporter's setTargetDocument.
It is good to note that another way of integrating filters into LibreOffice could be using the com::sun::star::xml::XExportFilter and com::sun::star::xml::XImportFilter interfaces, which are grosso modo equivalent to the described method. The difference is that the FilterService in the configuration XML file will in this case always be com.sun.star.comp.Writer.XmlFilterAdaptor, and the actual filter component, as well as the target and source services, are specified in the configuration file in the UserData property. But this is just an aside, since the method I described in detail is much more generic.
When we were creating the XML configuration files, we said that the com.sun.star.comp.Draw.MSPUBImportFilter component is also able to do the type detection. For that purpose, it must support the com::sun::star::document::XExtendedFilterDetection interface, and thus its detect function. This function should return the string corresponding to the type name in the configuration file if it detects the document, and an empty string for the cases when it is not able to identify the document.
OUString SAL_CALL MSPUBImportFilter::detect(Sequence <PropertyValue> &Descriptor)
{
    OUString sTypeName;
    sal_Int32 nLength = Descriptor.getLength();
    sal_Int32 location = nLength;
    const PropertyValue *pValue = Descriptor.getConstArray();
As in the filter function, we need to extract from the sequence the InputStream that we will examine. There is one difference: we also keep the location of the TypeName property, so that we can fill it with the name of the type in case we detect it. The detect function should fill the variable sTypeName with the right string in case the detection was successful, and it is in this case that we set this information in the Descriptor and return the name of the type.
    if (!sTypeName.isEmpty())
    {
        if (location == Descriptor.getLength())
        {
            Descriptor.realloc(nLength + 1);
            Descriptor[location].Name = "TypeName";
        }
        Descriptor[location].Value <<= sTypeName;
    }
    return sTypeName;
}
It would not be true to say that this is all that is needed to integrate a filter into LibreOffice. There are still some ten to fifty lines of code needed for the implementation of the generic UNO boilerplate, an XML file for the UNO component registration during the build, and some makefile changes. Nevertheless, those changes are trivial and can be done by mimicking existing filters like those in the writerperfect module of the LibreOffice code.
Getting involved
Free software is about people, and the LibreOffice project values all contributors highly, regardless of the size of their contribution. The community is thrilled to welcome anybody who wants to lend a hand to make the software better. And why not you? If you think that writing filters for LibreOffice is enough fun for you, there are plenty of dedicated developers ready to help you, either on the developer list libreoffice@lists.freedesktop.org or on IRC in the #libreoffice-dev channel of the Freenode server. Just drop by and we will help you to write your first filter. We guarantee that you will enjoy it and stick with the project.
Just a quick note to announce that I released Exempi 2.2.1. It was long overdue. It is mostly a couple of bugfixes.
Note: so that there is no misunderstanding, since people see this on Planet Mozilla, this is not a Mozilla project. But it is completely Free Software.
Here is the short Changelog
Next release will be 2.3.0 and will integrate the latest Adobe SDK used in the Creative Cloud.
There is no question about that.
I just switched from an Android phablet made by Samsung, a device I came to hate for many reasons, to a Firefox OS Geeksphone Keon. That was my second Android phone; I switched because I got it for free[1], needed a carrier that worked better than the failure that is WIND Mobile, on which I was using my Nexus One[2], and said Nexus One was abandoned in OS upgrades by HTC AND Google after 22 months. I have to admit I missed the Nexus One, still, as Samsung didn't make Android better, quite the opposite.
Back to the point. I got that Geeksphone Keon, provided by my employer: Mozilla.
This is not a review of the phone, BTW, and all of this also applies to the just released Firefox OS phone in Spain.
On my Android phablet[3] I used 4 applications: the web browser, a twitter client (not Twitter's own though), Instagram and Foursquare.
On my Firefox OS phone, I had to scrap the last two. Why? Because despite requiring an internet connection and having some sort of web interface, they are unusable on the web.
Web browser
On Android I used Firefox for Android as my web browser. It is currently the best solution for web browsing: it is designed to protect your privacy and to run on more devices than Google's own Chrome. Call me biased if you want, but the truth is I have been using Firefox on the desktop too.
Firefox OS web browser is basically the same thing.
Twitter
Twitter is a bit hurtful. It is designed from the ground up to be used as a web application, and Twitter has a mobile version that is meant to work well on small screens. They even have a packaged version for the Firefox OS Marketplace. Where it hurts is that the Twitter web UI remains awful, either deliberately (given that the iOS client is awful too) or because we got spoiled by third-party clients. On Android I was using Twicca (no source code) or Twidere (broke a bit at one point), but it should be noted that Twitter gave the finger to third parties when they added restrictions on the development of clients, as well as bickering with Instagram so that their content is not shown inline.
They get almost full marks for being a web app and treating it as first class.
Foursquare
On the desktop, if you go to Foursquare you get a decent web application, albeit one where you can't do the one thing Foursquare exists for: check in.
On mobile, if I visit the website on Firefox for Android I get prompted to download an app.
On Firefox OS it is worse. It looks like their detection fails and they serve the desktop website, which is mostly unusable on such a small screen. I filed bug 878132 for our tech evangelism team to eventually have a look at.
It seems they didn't go all the way to make the service relevant on the mobile web. Sadly. The experiment I started at the end of last year, when I signed up for the service, stops right here with Firefox OS. It turns out I don't need it. They lost a user.
Instagram
This one is the worst of the worst. First and foremost, their web interface for the desktop is very limited. Second, it doesn't scale at all on mobile - some content scales better than the rest. Third, they bickered with Twitter, so their content is not viewable inline.
Why does that last one matter? Try viewing Instagram content in the Twitter mobile web client.
I give it an F as a mark.
Conclusion
Simply make your mobile app web based. It will run on iOS, Android, Firefox OS, Blackberry, etc., people will be able to follow along when they change phones, and you won't need to spend a lot of resources on each platform.
Also if you really want to have a packaged app, remember there are technologies like PhoneGap whose purpose is exactly that.
[1] minus the money I had to spend for unlocking it, thanks to consumer protections that don't exist in Canada
[2] first and foremost, I didn't have service at the office downtown; second, I was in the process of moving to Montréal where they don't have service anyway
[3] in case you didn't realize I call it phablet because it is a small tablet that one can use as a phone. Too big for your pocket, too small to be a good tablet, the worst of both worlds. It would never have been my choice ; but one doesn't simply look into the gifted horse's mouth.
Attentive readers of this blog will remember that, besides improvements in the most frequently used file-formats, each major release of LibreOffice adds to the list of document file-formats that are freed from the dungeon of vendor lock-in. In a collaboration with re-lab's Valek Filippov and (then GSoC student and now Lanedo's LibreOffice developer) Eilidh McAdam, LibreOffice 3.5 brought to the FLOSS world the possibility to open and view the most commonly used Visio files. LibreOffice 3.6 was able to claim the most comprehensive coverage of the CorelDraw file-format, with the ability to open even the oldest CorelDraw 1 and 2 files that modern versions of CorelDraw are no longer able to open.
The latest major release of LibreOffice was also full of goodies. First, the fruitful collaboration of re-lab's Valek Filippov with (then GSoC student and now amazon.com employee) Brennan T. Vincent produced the first ever possibility of reading Microsoft Publisher files in the FLOSS world. Second, with the advent of Microsoft Office 2013 and the change in the Visio 2013 file-format, LibreOffice extended its coverage of the Visio file-format to files produced by every version of Visio ever released.
The LibreOffice 4.1 release is approaching quickly. And that is excellent news for bad teenage poetry and other literary production from the late 80s and early 90s. With the upcoming release, LibreOffice extends support to a host of pre-OS X Mac text formats. This is the result of a continuous effort to open as many legacy file-formats as possible to our users, and to help them settle on ODF.
This particular improvement was possible thanks to the integration of libmwaw, written by Laurent Alonso, LibreOffice contributor and already co-maintainer of libwps and of the Microsoft Works import filter inside LibreOffice. The horsepower doing the conversions, libmwaw is one of the libraries from the libwpd family. In the same way as libwps, libmwaw reuses libwpd's interfaces and the ODF generator classes in libodfgen in order to convert its callbacks into an xml stream in the flat ODF file-format. The import filter lives in the writerperfect module.
The supported file-formats include Microsoft Word for Mac versions 1 to 5.1, the Mac versions of Microsoft Works, and different versions of ClarisWorks and AppleWorks, to name but a few. The list of supported file-formats and of imported features is growing literally every day. This promises further good news with every minor release of LibreOffice 4.1. More teenage poetry and bad literature will be freed from the pit of discontinued software products.
After having found a way to get screenshots of some sample documents in their respective generating applications, we are able to satisfy those readers who are hungry for pictures. First is a sample document in the Mac Word 5.1 (1992) file-format, opened in the originating application and in the upcoming LibreOffice 4.1:
[screenshots: Mac Word 5.1 | LibreOffice 4.1]
Following is a simple document with a picture, produced by Write Now 4.1 from about 1993. It demonstrates why LibreOffice is frequently called the "Swiss Army knife" of file-formats:
[screenshots: Write Now 4.1 | LibreOffice 4.1]
Following is an example of conversion of a document in MacWrite Pro 1.5 file-format from 1994:
[screenshots: MacWrite Pro 1.5 | LibreOffice 4.1]
And, last but not least, is an example of conversion of a word-processing document in AppleWorks 6.0 format from the late 90s. The software was discontinued by Apple with the end-of-life of their PowerPC series, but LibreOffice can resurrect your documents:
[screenshots: AppleWorks 6.0 | LibreOffice 4.1]
Pretty exciting news! But the most exciting thing is that you can be part of this adventure. Join the fun by submitting bugs or by fixing your personal itches. So, if you want to help, patches can be sent to the libreoffice-dev mailing list. And, do not forget to find a way to join the #libreoffice-dev channel at irc.freenode.net in order to meet other developers. We can promise you that you will have a lot of fun in the LibreOffice community.
C++11 is now available in both gcc and clang. That means it is really available where it matters.
Using C++11 in your project (with autoconf)
First, if you use autoconf, you have to detect it. The autoconf archive has a macro for this. Download the .m4 definition and put it in the m4 directory of your project.
In configure.ac, add the following line:
AX_CXX_COMPILE_STDCXX_11(noext,mandatory)
Make sure it appears after AC_GNU_SOURCE.
This will make configure detect C++11 support, without GNU extensions (I tend to avoid these in general), and fail if it doesn't exist. If you prefer to make it optional, read the macro documentation, which has more details.
The interesting features
I'm interested in several features from C++11.
- auto, to automatically deduce the type where it can. Ever gotten annoyed by the long type names for iterators of containers? Just use auto instead.
- std::for_each().
- The things that used to live in std::tr1::. Just replace with std::.
- std::bind and std::function, to replace Boost's own versions.
There are more; I'll talk about them when I get to look at them.
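To illustrate how these features tidy up everyday code, here is a small, self-contained example; it is not taken from any particular project, and the names in it are made up for the illustration:
#include <algorithm>
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <utility>

// A free function we will wrap with std::bind further down.
static int add(int a, int b)
{
    return a + b;
}

int main()
{
    std::map<std::string, int> ages = { { "alice", 30 }, { "bob", 25 } };

    // 'auto' deduces the painfully long iterator type for us.
    for (auto it = ages.begin(); it != ages.end(); ++it)
        std::cout << it->first << " is " << it->second << "\n";

    // A lambda passed to std::for_each(), no hand-written functor needed.
    std::for_each(ages.begin(), ages.end(),
                  [](const std::pair<const std::string, int> &p)
                  { std::cout << p.first << "\n"; });

    // std::function and std::bind replace their Boost counterparts.
    std::function<int(int)> add_five = std::bind(add, std::placeholders::_1, 5);
    std::cout << add_five(37) << "\n";

    return 0;
}
With the configure check above in place, this builds with g++ -std=c++11 or clang++ -std=c++11.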
It is once again that period of the year when the results of the Google Summer of Code selection are made public. As for the LibreOffice project, we got 13 slots this year. We love you, Google! We really do!
Nonetheless, we had many more good applications than slots, and we had to make hard choices based on a variety of parameters. The final line-up that came out is:
Project | Student
Adding alternating row coloring to database ranges and supporting new structured reference syntax | she91
Code completion in the Basic IDE | stalker08
Extend support for Document Management Systems | Cuong Cao Ngo
Implement Firebird SQL connector for LibreOffice Base | Andrzej Hunt
Implementing an about:config functionality | Efe Gürkan YALAMAN
Implementing Proper Table Styles in Writer | Ivan Nicolae-Alexandru
Impress Remote Control for iOS | LIU Siqi
Improve toolbars in LibreOffice | Prashant Pandey
Improved Android / Impress Remote Control | Artur Dryomov
Slide Layout Extendibility | Vishv Brahmbhatt
Use Widget Layout for the Start Center | Krisztian Pinter
VLC integration into LibreOffice | Minh Ngo
Writer: Border around characters | Zolnai Tamás
Congratulations to the selected students. We expect you to be bonding hard during the community bonding period that has just started. Your presence on IRC, and even an early start on the hacking, is required now!
For the students that unfortunately could not be selected: do not be discouraged. Your Easy Hack patches made a real difference; sorry it did not work out this time. The LibreOffice community is always welcoming, and you can learn a lot just by staying around and working at your own pace on your chosen Easy Hack.
Mozilla is 15, and that's 15 years of fighting for the open web. I remember the source code release; I built it on a Pentium 166 with 64MB of RAM - a Debian box. Maybe it had even less RAM than that, I forget. It was huge.
Since then, the web has moved forward big time, and Firefox helped users take back the web by bringing down the IE supremacy and focusing on standardized web technology.
I have great hopes for the future of the free web.
Google shut down Reader, their feed aggregator. Speculation is that it was done to promote the use of the proprietary publishing silo that is Google+, and I'm not just saying that out of some grudge I might hold against Google+; I actually believe it might be one of the considerations.
Imagine for a second that all content was pushed exclusively to a popular silo like Twitter, Facebook or Google+: it would be confined to these environments and people wouldn't be able to aggregate it elsewhere. Now what if one of these hugely popular silos disappeared? It has happened, it can happen again; I have numerous examples. And I am still looking for the Google+ or Facebook feeds, while it is clear that Twitter already removed theirs.
With RSS[1], all we need is a different aggregator to pull the feed. It would still work. And that's what is happening with the Google Reader user base: they are moving to other platforms that offer the same feature, either web based or using desktop software.
Let's treat this as a learning step and continue to focus on open standards for publishing. Let's continue to provide feeds. Let's continue to request feeds. And more importantly, us software hackers, let's continue to provide awesome libre software to do the job, software we can reliably build upon.
[1] this includes Atom and other variations of feed publishing based on open standards
It is the new year. We have a tendency to pick artificial starting points in time when we want to (start to) do things, something like the "new year resolutions". I don't really abide by that, because I believe you should do things when you want to, have to or can. You don't need a January 1st or some such. This year it happens that the new year almost coincides with my own timeline: two weeks into the new house in Montréal, which means that for once I can use that as a starting point; or not.
Anyway.
Happy new year, and remember, be excellent to each other !
The libvisio library underwent heavy re-factoring as we started to understand more and more details about the underlying file-format. We also added support for the XML-based Visio file-formats, namely the "XML Drawing" format, also known as *.vdx, and the new Microsoft Visio 2013 file-format, known as *.vsdx.
[screenshots: file opened in Visio 1.0 | the same file opened in LibreOffice 4.0.0 beta1]
[screenshots: VSDX file opened in Microsoft Visio 2013 | the same file opened in LibreOffice 4.0.0 beta1]
It has been a long time since I last communicated with the distinguished readership of this blog. There was a hard decision to be made between producing code and producing literature, and the code won until now. But I have finally found time to lift my head up from the coding, so the literature is back.
Many of you might be wondering what happened since my post about the text support in CorelDraw files from last June. Things are going pretty well. Since the CorelDraw import filter was released with LibreOffice 3.6, users have started to use the feature and report bugs. We have been working on fixing them and improving libcdr's quality.
Quick overview of the reverse-engineering process
From my discussions with our users and developers, on-line and during some of the conferences I attended, I realized that there is a slight misunderstanding among the general public about how reverse-engineering works. So, here are some thoughts that may help understand it a bit better:
At the beginning of the process, there is a file-format. We don't know anything about its internal structure; there is no documentation about it whatsoever. One tries to generate a file in this file-format and examine it in a hexadecimal viewer. Next, one makes some small change in the document and examines what changed in the file itself. Eventually, after many iterations, one might find regularities and some structure that helps to divide the file into several sections or blocks of more manageable size. It is essential in this phase to encode this information into some kind of introspection tool, since a plain hexadecimal viewer is not a very productive tool in the long run. For introspection of documents we use Valek Filippov's oletoy, a python tool that stores our knowledge about the structure of different file-formats.
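To make the "change one little thing and look at what moved" step more concrete, here is a minimal sketch of a byte-level comparison between two generated files. This is only an illustration of the idea, not one of our tools; the real work is done with a proper introspection tool such as oletoy:
#include <algorithm>
#include <cstddef>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

int main(int argc, char **argv)
{
    if (argc != 3)
    {
        std::cerr << "usage: bytediff file1 file2\n";
        return 1;
    }

    // Slurp both files into memory.
    std::ifstream f1(argv[1], std::ios::binary), f2(argv[2], std::ios::binary);
    std::vector<unsigned char> a((std::istreambuf_iterator<char>(f1)),
                                 std::istreambuf_iterator<char>());
    std::vector<unsigned char> b((std::istreambuf_iterator<char>(f2)),
                                 std::istreambuf_iterator<char>());

    // Print the offsets where the two files differ; a small, localized cluster
    // of differences usually points at the record that encodes the change.
    const std::size_t n = std::min(a.size(), b.size());
    for (std::size_t i = 0; i < n; ++i)
        if (a[i] != b[i])
            std::cout << std::hex << i << ": " << int(a[i]) << " -> " << int(b[i]) << "\n";
    if (a.size() != b.size())
        std::cout << "sizes differ: " << std::dec << a.size() << " vs " << b.size() << "\n";
    return 0;
}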
Once there is enough information about how to parse the document structure, the next target becomes getting some visible results. In order to save time, all libraries such as libcdr or libvisio use libwpg's interface. Reusing this interface means a considerable saving of time, since there are already working generators of ODG and SVG from the callbacks of this interface. Having visible results early in the development/reverse-engineering cycle also allows one to visually assess the import results and correct them if necessary. Eventually, one may realize that some necessary information is missing and go back to reverse-engineering to find it.
The support of reverse-engineered file-formats is a constant work-in-progress, a subtle dance between implementation and information digging. In this process, user feedback is an essential element. The theories about the meaning of some information inside a file hold only until a file comes along to falsify them. Even a complex file generated by a developer is easily beaten by real-life documents, and each file that exhibits a "weird" bug advances the understanding of the file-format. Let us look at an example:
After the release of LibreOffice 3.6.1, we got a not-so-good assessment of the quality of the CorelDraw import filter in heise.de's c't review. Those of you who understand German can delight in the nuanced evaluation:
Ein neuer Import-Filter in Draw öffnet jetzt auch CorelDraw-Dateien, was uns im Test allerdings nur mit sehr einfachen Zeichnungen fehlerfrei gelang. In dieser Form ist er schlicht unbrauchbar.
Which can be mildly translated into English (given the understatements so common in en-GB):
A new import filter in Draw opens now also CorelDraw files, which we managed to do without errors only with very simple drawings. In this form, it is rather unusable.
Since we really care about the quality of our software, we are thankful for any bug report, whether it is brought to us in a friendly manner or otherwise. This specific bug report helped us to understand how chains of matrix transforms are stored in newer CorelDraw files. And since a picture speaks louder than a thousand words, compare the document c't was referring to opened in LibreOffice 3.6.2 and then in LibreOffice 3.6.3, after we fixed the position bits.
[screenshots: file opened in LibreOffice 3.6.2 | the same file opened in LibreOffice 3.6.3]
So feel encouraged to submit bugs against the CorelDraw import filter, or — even better — send us patches for your favorite itch.
Broken Lock by lyudagreen, on Flickr
A big North American online travel booking system still stores passwords in plain text. Worse: they claim they take your security seriously. Here is an excerpt of the confirmation email you get when you register:
USERNAME: USER@EMAIL.DOMAIN
PASSWORD: We're serious about security. Since your password is confidential, we won't repeat it here. However, if you ever forget your password, you can always request a reminder
Yes, the email has been capitalized.
The other day I wanted to book some airline tickets, so I returned to the website. I had forgotten the password. No biggie: I followed the "lost password" procedure and chose "email" instead of the still-idiotic "security question".
Guess what? I didn't get a link to reset my password, or a temporary password. No. I got my password sent in plain text. Worse: it was in UPPERCASE, and passwords are case-insensitive in their system. Wow. Just wow.
PS: this is not the corporate travel booking system we use at Mozilla.
In my previous post ''What happened to all the pioneers in personal computing?'' I forgot a few notable companies.
As I was writing this second post, Ars Technica published From Altair to iPad: 35 years of personal computer market share, where they relate the 35 years from the Altair to the move to the iPad as a personal computing device.
The Commodore 64 is 30. The TRS-80 is 35. But what happened to all the pioneers of the Personal Computing era?
So what is left from the pioneers?
Am I missing anything?
Update: part deux
The Firefox 16 uplift to Aurora is today. This version will have accessibility enabled on Mac, finally, but you must either force-enable it or use VoiceOver. It should work for basic tasks, albeit there are some serious performance problems with VoiceOver that I'm investigating.
Also coming soon, for Firefox 17: proper handling of image maps.
I am happy to announce the upcoming book of my dear wife. A must read for all interested in intellectual property, in access to copyrighted materials and in development issues.
This book originates from a PhD thesis defended at the Graduate Institute of International and Development Studies, Geneva, Switzerland. It has been awarded "summa cum laude" mention.
Please check with your libraries whether they know about the book, and strongly advise them to purchase it, for the greater good of humanity :)
One puzzling thing with YouTube HTML5 support is the message "this video is currently unavailable", which could mean a lot of things. The actual translation is "we need to show you ads and you need Flash for that".
It should be noted that there is no problem on mobile platforms, Android or iOS: the video is shown.
I don't know if you noticed, but when you connect a Nexus One or a Samsung phone (Gingerbread or ICS, tested with a SGS 2 or Galaxy Note) to your Mac, the phone isn't recognized as a camera.
There is a difference between the Nexus One (stuck on Gingerbread) and the Samsung. The Nexus is USB Mass Storage (i.e. the phone is seen as a USB disk) while the Samsung is MTP (a variant of PTP, the USB standard for still-image cameras). But in both cases, the MacOS digital camera support (Image Capture, iPhoto or Aperture) recognizes the device but does not show anything. Adobe Lightroom is in the same boat (I'm not sure whether it uses the OS capability or reimplemented it). This is because Android butchers the implementation of the Design rules for Camera Filesystem. See Android bug 2960, where you'll notice that it was largely ignored by Google despite a patch even being available.
For the Nexus One this does not prevent manually copying the images. But Samsung... one would think they would have fixed that, but obviously they didn't. To make things worse, Samsung doesn't use Mass Storage but MTP, which means that there is no way to just copy files from the camera[1]. That last bit is utter fail.
Update (June 21st): from the comments, apparently I can set the Galaxy Note to act as USB Mass Storage. It is complicated, needs to be done manually, requires disabling USB Debugging (it will do that for you, but not re-enable it), etc. In short, they turned something relatively simple into something overly complex and unfriendly. Worse, it takes so many steps to reach the dialog where, like on the Nexus One, you can tap to enable Mass Storage. The positive side is that you can't enable Mass Storage without unlocking the phone, which is a security feature.
[1] unless maybe you install some tool, but anything runs better without Samsung software
Some update about Firefox accessibility on Mac:
You can use about:config to force enable it (bypassing the white listing) or to disable it.
The preference is accessibility.force_disable, and it has 3 values: -1 to force enable, 0 for the default automatic behaviour, and 1 to force disable.
This also works on Windows (the value -1 is unused there) and soon on Linux with ATK (I still have to finish it).
I hope to get more rolling before we uplift Aurora 16.
Pleasantly surprised that we made it into the great LWN.
Uff, it is done!!!
We started to work on the text support inside libcdr already before the Libre Graphics Meeting in Vienna. We worked hard during the talks and during the long evenings, after having eaten some portions of Wienerschnitzel.
Now we are proud to announce that we managed to release libcdr-0.0.8 yesterday, with "basic initial primitive [u]ncomplete" (further BIPU) text support. At the moment, we support only a couple of parameters, such as font face and font size, and we are able to detect the encoding and produce a corresponding utf-8 string. Far from being perfect, it is nonetheless a milestone, because there was no support for CorelDraw text in the FOSS world before.
We know that you prefer to look at nice pictures instead of reading bad text. So, here is your heart's desire.
A simple document with text in CorelDraw 7:
The same document opened in a build of LibreOffice from yesterday:
At the moment, libcdr is able to convert text in CorelDraw documents from versions 7 to 16. Nonetheless, we already know roughly how to read it in files of older versions, and we will add that support in the next release. In the same way, we will extend our support of other text properties, like font colour, transparency, effects, paragraph alignment, character positions, etc.
How can I test it? All this goodness will be part of the LibreOffice 3.6.0 release. You will be able to test the text support in the 3.6.0 beta2 pre-release. For the brave, any of the daily builds built from a code checkout after June 11th also includes libcdr-0.0.8 and thus the text support in CorelDraw files.
As usual, this is a free and open source software project and, as such, it delights in developers who want to help. So, if you feel the itch, patches can be sent to the libreoffice-dev mailing list. And, do not forget to find a way to join the #libreoffice-dev channel at irc.freenode.net in order to meet other developers. We can promise you that you will feel at home in the LibreOffice community.
Yep, I deleted my LinkedIn account. I got no value from it, and the leak of 6.5M unsalted password hashes was just the icing on the cake. For so long they had deficient SSL support, they ask you to decrypt a captcha to log in, and there are a lot of other stupidities. And their mobile app steals or leaks personal info like your iPhone calendar.
I should have done that a long time ago. When they asked for a reason, I typed in "too dumb with security".
You know where you can find me.
As Sophie Gauthier announced in the language of Voltaire, LibreOffice was branched for the beta phase in view of the 3.6 release. This is a major step towards bringing the features we have been working on during the last half a year to end users. But it is also an opportunity to bring to the main codebase all the nifty features that were developed in feature branches and targeted for the next big release, presumably 3.7.
It is in this way that the first version of our new Microsoft Publisher import filter landed on master. This filter is being developed by Brennan Vincent from the University of Arizona in the framework of the Google Summer of Code. Although it is a work in progress and supports for now only the Publisher 2003 file-format, the progress is spectacular. Brennan has been busy as a bee since long before the official start of the program. Only two weeks after the official kick-off, we have a first (non-)release, libmspub-0.0.0.
And as the careful readers of this blog already know that an image speaks louder than a thousand words, here are the pics:
A random document from the Internet opened in Microsoft Publisher 2003:
The same document opened in LibreOffice master build from yesterday:
With Valek Filippov, we are having a lot of fun mentoring this project. If anybody among the distinguished readership wants to join this effort, the code of libmspub lives in the LibreOffice freedesktop.org repository. Patches can be sent to the libreoffice-dev mailing list. And, do not forget to find a way to join the #libreoffice-dev channel at irc.freenode.net in order to meet other developers.
You will never regret the decision to get involved in LibreOffice.
A very happy first week to my baby daughter Amélie! She was born last Sunday, May 27th, and she and her lovely mother are doing very well. During the day she is total cuteness, rainbows and unicorns, while at night she turns into a hungry monster! A cute monster, but still… :P
How to take a screenshot on an Android phone, a Google Nexus One in my case. I have to document it, because "home + power" like on an iPhone (or any iOS device) is far too complicated.
Warning: this post contains whole parts of ranting and sarcasm.
Run ddms from the Android SDK.
Kaboom. It crashes.
01:42:04 E/ddms: Failed to execute runnable (java.lang.ArrayIndexOutOfBoundsException: -1)
org.eclipse.swt.SWTException: Failed to execute runnable (java.lang.ArrayIndexOutOfBoundsException: -1)
    at org.eclipse.swt.SWT.error(Unknown Source)
    at org.eclipse.swt.SWT.error(Unknown Source)
    at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Unknown Source)
    at org.eclipse.swt.widgets.Display.runAsyncMessages(Unknown Source)
    at org.eclipse.swt.widgets.Display.readAndDispatch(Unknown Source)
    at com.android.ddms.UIThread.runUI(UIThread.java:517)
    at com.android.ddms.Main.main(Main.java:116)
Caused by: java.lang.ArrayIndexOutOfBoundsException: -1
    at org.eclipse.jface.viewers.AbstractTableViewer$VirtualManager.resolveElement(AbstractTableViewer.java:100)
    at org.eclipse.jface.viewers.AbstractTableViewer$1.handleEvent(AbstractTableViewer.java:70)
    at org.eclipse.swt.widgets.EventTable.sendEvent(Unknown Source)
    at org.eclipse.swt.widgets.Widget.sendEvent(Unknown Source)
    at org.eclipse.swt.widgets.Widget.sendEvent(Unknown Source)
    at org.eclipse.swt.widgets.Widget.sendEvent(Unknown Source)
    at org.eclipse.swt.widgets.Table.checkData(Unknown Source)
    at org.eclipse.swt.widgets.Table.cellDataProc(Unknown Source)
    at org.eclipse.swt.widgets.Display.cellDataProc(Unknown Source)
    at org.eclipse.swt.internal.gtk.OS._gtk_list_store_append(Native Method)
    at org.eclipse.swt.internal.gtk.OS.gtk_list_store_append(Unknown Source)
    at org.eclipse.swt.widgets.Table.setItemCount(Unknown Source)
    at org.eclipse.jface.viewers.TableViewer.doSetItemCount(TableViewer.java:217)
    at org.eclipse.jface.viewers.AbstractTableViewer.internalVirtualRefreshAll(AbstractTableViewer.java:661)
    at org.eclipse.jface.viewers.AbstractTableViewer.internalRefresh(AbstractTableViewer.java:635)
    at org.eclipse.jface.viewers.AbstractTableViewer.internalRefresh(AbstractTableViewer.java:620)
    at org.eclipse.jface.viewers.StructuredViewer$7.run(StructuredViewer.java:1430)
    at org.eclipse.jface.viewers.StructuredViewer.preservingSelection(StructuredViewer.java:1365)
    at org.eclipse.jface.viewers.StructuredViewer.preservingSelection(StructuredViewer.java:1328)
    at org.eclipse.jface.viewers.StructuredViewer.refresh(StructuredViewer.java:1428)
    at org.eclipse.jface.viewers.ColumnViewer.refresh(ColumnViewer.java:537)
    at org.eclipse.jface.viewers.StructuredViewer.refresh(StructuredViewer.java:1387)
    at com.android.ddmuilib.logcat.LogCatPanel$LogCatTableRefresherTask.run(LogCatPanel.java:1000)
    at org.eclipse.swt.widgets.RunnableLock.run(Unknown Source)
    ... 5 more
Et voilà. You still haven't taken a screenshot.
And for the record, I'm aware that Android 4.0 can do it, but Google still hasn't provided an update for the Nexus One (their first flagship device) and is unlikely to. That's not exactly encouraging me to buy a newer device.
Update: upgraded to platform tools version 11 and still the same problem.
For the last few weeks my spare cycles have been mostly spent on the Guacamayo Project; this is something that Ross and I have been toying with for a while, and it's probably time to say a bit about it.
In a nutshell, Guacamayo is a specialised Linux distribution for networked multimedia devices; I say specialised because the aim is not to produce yet another rehashed desktop distro with a bit of multimedia functionality on the side, but a system built from the ground up for a pure multimedia experience.
The clearly defined focus allows us to do one thing in particular: we can ditch the traditional Linux desktop! The Guacamayo aim is to provide an intuitive gateway into a multimedia world; the traditional desktop metaphor, made up of workspaces, applications, documents and no end of toolbars and menus, does nothing but stand in the way. Considering most of us have to put up with that sort of mess during working hours, I think we deserve better when it's time to chill out.
Ditching the traditional Linux desktop has some other inherent benefits; we can forget about legacy technologies, not least the venerable X11 windowing system, and instead choose what makes best sense for creating that sort of user experience we are after.
So what are we doing:
Supported HW? We are not focusing on any HW in particular. Our aim is to create a distro that could be used on a broad variety of suitable HW. The current development is done using the Zotac zbox (Intel Atom) and the Beagleboard (Arm Cortex A8), and we fully intend to support Raspberry PI (eagerly awaiting HW).
We did our first release, code named 'See No Evil, Hear All You Want!', aka 0.2, last week. As the name suggests, this is a limited-functionality, audio-only release, meant to get interested folk to come along and start testing and contributing. The next stable release is planned to include MEX running under X11, as a stepping stone toward a pure OpenGL system beyond that.
If you are interested, the source is here, you can drop by #guacamayo on Freenode, or follow @MetaGuacamayo on Twitter.