Archive

Posts Tagged ‘Eclipse’

Eclipse Project Set Editor

January 15, 2013

An Eclipse Project Set file (.psf) enables quick export and import of projects from a repository such as Git or SVN. Eclipse currently supports exporting projects in a repository as a Project Set file, but an editor for an existing PSF file is missing. This means that updating an existing PSF file (for example, adding a new project or removing an existing one) requires editing the PSF XML by hand!
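
For reference, a PSF file is plain XML. A Git-based project set might look roughly like this (the provider id and the reference string are illustrative; the exact reference format depends on the repository provider):

<?xml version="1.0" encoding="UTF-8"?>
<psf version="2.0">
    <provider id="org.eclipse.egit.core.GitProvider">
        <!-- illustrative reference: version, repository URI, branch, project path -->
        <project reference="1.0,git://example.org/myrepo.git,master,myproject"/>
    </provider>
</psf>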

The Project Set Editor provides a simple user interface to view and edit PSF files (similar to the Manifest Editor or the Target Definition Editor). The editor also allows directly importing the artifacts in a PSF file.

You can find the project at Eclipse Labs: http://code.google.com/a/eclipselabs.org/p/psfeditor/

Doors are open for testers and committers.

 


2 Fast thrice Furious

August 3, 2011

Preparing to make the initial code contribution for RMF, we ran our RIF/ReqIF metamodels through several performance tests. To start with, we tested the load and save times of RIF files based on some industry samples. To get comparison data, we generated XMI files from the same data held in the RIF XML files and tested load/save times against them. The results are quite promising.

Before we go into the details of the tests, it is better to define the two components involved in our comparisons.

  • The customized RIF XML loader (a.k.a. RMF Loader) and serializer (a.k.a. RMF Serializer) for loading/saving OMG RIF XML files into the RIF Ecore metamodel (read more on the metamodel implementation here).
  • The default RIF XMI loader and serializer for loading/saving RIF XMI files into the RIF Ecore model (this is not in the scope of RMF; we use it only to get some comparison data).

Here are some highlights from our tests.

  • A 32 MB RIF XML file is loaded in 14.4 seconds by the RMF loader, whereas the same data in XMI format is loaded by the default EMF XMI loader in 22.2 seconds (and in 70 minutes(!!) without any optimizations to the XMI loader)
  • The average time to load one MB of data from RIF XML is 0.5 seconds, whereas RIF XMI takes 1.63 seconds per MB. For saving, RIF XML takes 0.09 seconds per MB, whereas RIF XMI takes 1.22 seconds per MB
  • The load and save times for RIF XML files with the RMF loader/serializer increase linearly with file size


Dissecting RIF/ReqIF metamodel

July 29, 2011


RIF/ReqIF is the new OMG standard for requirements interchange. RMF (currently in the proposal phase) provides an EMF Ecore based metamodel implementation for the RIF/ReqIF XML format. The metamodel is a clean implementation of the format without any “XML noise”. The Ecore metamodel also conforms to the CMOF metamodel delivered by OMG (it has been derived from it). The metamodel reads/writes RIF/ReqIF data in conformance with the RIF/ReqIF XML Schema.

The challenge in the Ecore based RIF/ReqIF implementation was customizing the loaders and serializers to make them RIF/ReqIF XML Schema conformant. EMF provides different ways of customizing the XML output:

1. Using ExtendedMetadata annotations.

2. Implementing new XMLLoad, XMLSave and XMLHelper.

Both approaches are quite tricky to implement when the expected XML output has structural differences compared to the Ecore metamodel. For example, the XML output has wrapper elements around what are plain lists (ELists) in the Ecore metamodel.

We went in for a third approach.

We imported the RIF/ReqIF XML Schema using the EMF importer to create a RIF/ReqIF XML Ecore model (anyone who has done this knows the “ugly” metamodel that EMF generates here). The next step was to create the real RIF/ReqIF metamodel by importing the RIF/ReqIF CMOF file. We then wrote a generic (and of course reusable) Ecore-XML-to-Ecore converter to do a model-to-model transformation in both directions. The whole processing is cleanly hidden in a new EMF Resource implementation, so the user hardly notices any of it.
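
To give an idea of how the transformation stays hidden, here is a minimal sketch of such a Resource implementation. The class and converter names are hypothetical, not RMF's actual code:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Map;

import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.impl.ResourceImpl;
import org.eclipse.emf.ecore.xmi.impl.XMLResourceImpl;

// Hypothetical sketch: a Resource that converts between the schema-derived
// XML model and the clean CMOF-derived metamodel on load/save.
public class RifResourceImpl extends ResourceImpl {

    @Override
    protected void doLoad(InputStream inputStream, Map<?, ?> options) throws IOException {
        // Load the "ugly" schema-derived model with the standard EMF machinery.
        Resource xmlResource = new XMLResourceImpl(getURI());
        xmlResource.load(inputStream, options);
        // Transform to the clean metamodel (EcoreXmlToEcoreConverter is hypothetical).
        EObject cleanRoot = EcoreXmlToEcoreConverter.toCleanModel(xmlResource.getContents().get(0));
        getContents().add(cleanRoot);
    }

    @Override
    protected void doSave(OutputStream outputStream, Map<?, ?> options) throws IOException {
        // Reverse transformation on save, then serialize via the schema-derived model.
        EObject xmlRoot = EcoreXmlToEcoreConverter.toXmlModel(getContents().get(0));
        Resource xmlResource = new XMLResourceImpl(getURI());
        xmlResource.getContents().add(xmlRoot);
        xmlResource.save(outputStream, options);
    }
}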

The following are the advantages we saw in the approach.

1. You don’t do customizations at the XML level, but at the higher EMF model API level. That is, you don’t hack XML SAX events; you work with familiar EMF APIs.

2. The generic Ecore converter has enough hooks to plug in the transformations.

3. Maintenance is easy, as the customizations live in one place.

A possible drawback of this approach is the processing involved in the model-to-model transformation. However, the highly optimized transformation we implemented doesn’t make it all that slow. Our performance tests prove this (which will be the topic of my next blog post). It is in fact up to 300 times faster than having the same data in the default XMI format (yes, I meant 300! Ed Merks is going to get back to me on this, I hope ;) )

Edit:

Ed did get back to me. With performance optimizations applied to the XMIResource loader to cache intrinsic IDs and URIs, its performance improved dramatically: the XMI resource loader now takes only 22.2 seconds to load a 32 MB sample, compared to the earlier 70 minutes. I am glad that our RIF XMLResource nevertheless takes only 14.4 seconds for the same contents.


Requirements Modeling Framework (RMF)

July 18, 2011

Over the last few months, some guys at itemis and Düsseldorf University have been working closely to bring an open solution to lessen the big gap we currently have in the Eclipse ecosystem in the area of Requirements Management (RM). The Requirements Modeling Framework (RMF) has been proposed to the Eclipse Foundation as an open source project under the Model Development Tools project.

Scope of RMF

The scope of the project, as described in the proposal, is to provide an implementation of the OMG ReqIF standard (just as the Eclipse UML2 project provides an EMF based metamodel for OMG UML). Requirements management tools could then base their implementations on the provided ReqIF metamodel. Also in the scope of the project is a requirements authoring tool, again based on the ReqIF metamodel, and optionally a generic traceability solution.

Significance of ReqIF

ReqIF (Requirements Interchange Format) is an open standard from the OMG for requirements exchange (I blogged a while ago on how we arrived at ReqIF, earlier called RIF). Many tools, like DOORS, already support a snapshot export to this format. Having an EMF based metamodel for ReqIF opens up the Eclipse framework for integration and new tool development in the area of Requirements Engineering.

ReqIF in a nutshell


ReqIF offers a generic metamodel to capture requirements (this generic nature is at times highly criticized; more on that later). The figure above shows a bird’s-eye view of the metamodel. The metamodel allows the creation of requirement types (SpecType) with different attribute types, and also instances of them (SpecObject). Since ReqIF also carries the metadata (in the form of SpecTypes), any tool that understands ReqIF can process the data.

ReqIF also allows organizing requirements into hierarchies, grouping them, and controlling user access. To support rich text, the metamodel reuses XHTML.
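
To make the SpecType/SpecObject pairing concrete, here is a rough sketch of how typed requirements could be created against such a metamodel. The factory and class names follow the ReqIF terminology above and are assumptions, not the final RMF API:

// Sketch only: names are assumptions derived from ReqIF terminology.
SpecObjectType requirementType = ReqIFFactory.eINSTANCE.createSpecObjectType();
requirementType.setLongName("Functional Requirement");

AttributeDefinitionString description = ReqIFFactory.eINSTANCE.createAttributeDefinitionString();
description.setLongName("Description");
requirementType.getSpecAttributes().add(description);

// The instance references its type, so the metadata travels with the data.
SpecObject requirement = ReqIFFactory.eINSTANCE.createSpecObject();
requirement.setType(requirementType);

AttributeValueString value = ReqIFFactory.eINSTANCE.createAttributeValueString();
value.setDefinition(description);
value.setTheValue("The system shall respond within two seconds.");
requirement.getValues().add(value);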

Generic nature and “meta”ness of ReqIF

Anyone expecting a requirements metamodel might be surprised at first when they have a look at ReqIF: you hardly find the term “requirement” in there. ReqIF sits at a higher meta-level, that is, M1. It has been designed this way for a reason.

Requirements Management is an evolving field that until recently never crossed company boundaries. With tighter collaboration between partner companies, the benefits of applying RM across company borders became apparent. The field is highly dominated by commercial products, which left barely any chance for standardization. A group of companies in the automotive field realized that, with such diversity, hardly any unification was possible. They gave birth to this generic requirements interchange format, which was later adopted and standardized by the OMG.

The generic structure of ReqIF allows companies to use the tools of their choice for requirements management, use ReqIF as the standard exchange mechanism, and even do round trips. ReqIF is not the only standard with this generic nature in the field of Requirements Engineering. DOORS, for example, has an extensible database allowing users to add/delete attributes. Changes to such a schema are often communicated to partners by external means so they can replicate the changes in their own databases. Since ReqIF carries the meta-information with it, such changes migrate automatically across tool and company boundaries. Transmitting the meta-information in ReqIF could, however, be suppressed by tooling; that would nevertheless defeat the purpose of ReqIF.

It is also questionable why such a generic model is required when metamodels like UML or EMF are already available. The reason is much the same as why EMF co-exists with UML: ReqIF is not a general-purpose modeling language like UML or EMF, but is focused on the requirements domain.

Integrating ReqIF

RMF provides ReqIF as an EMF based metamodel. This could be used by tool vendors as the internal tool model, or as an export model by means of a model-to-model transformation. The provided loaders and serializers make sure that ReqIF files are read/written according to the ReqIF XML Schema.

Using ReqIF natively brings all the advantages of ReqIF to the tooling as well. For example, ReqIF based tools could model the requirements domain of the company within the tooling and share it with partners along with the instance data.

What next?

The initial code contribution is planned for early August 2011 and will be made available as soon as the IP review at Eclipse is complete. If you have any suggestions or comments about RMF, would like to contribute, or would like to be listed as an interested party, please provide them here.


10 common EMF mistakes

May 25, 2011

1. Treating EMF generated code as only an initial code base

EMF is a good starter kit for introducing MDSD (Model-Driven Software Development) in your project. Anyone who has been introduced to EMF and generated their first code with it will surely be impressed by the short turnaround time needed to get an initial version of a model based application up and running. For many, the thrill ends as soon as they need to change the behaviour of the generated code and have to dig into and understand it. It is complex, but only as complex as any other framework of similar size. Many projects I have come across make a mistake at this point: they start treating the EMF generated code as a mere initial code base and make it dirty by modifying it by hand, adding @generated NOT tags in the beginning, and later not even that. At this point you part ways with the “EMF way of doing things”, and with MDSD too.

The customizations mostly start with the generated editor. This is quite acceptable, as the editor is intended as an initial “template” that expects you to customize it. However, this is not the case with the model and edit projects. They have to stay in sync with your model, and this requires a firm decision: “I will not touch the generated code”. Follow the EMF recommended way of changing generated code, or use Dependency Injection.
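
For reference, the EMF recommended way is to flip the @generated tag on the members you customize, so the generator merges around them instead of overwriting them on regeneration. A small sketch (class and method illustrative):

/**
 * Hand-written replacement for the generated body. The generator
 * skips this method on regeneration because of the NOT marker.
 *
 * @generated NOT
 */
public String getLabel() {
    return "Order " + getId(); // custom behaviour, illustrative
}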

2. Model modification without EMF Commands

In EMF you deal a lot with model objects: almost any UI selection operation hands a model object over to you. It is easy to fall into the trap of changing the state of model objects directly once you have access to them, either through the generated model API or through reflective methods. Problems due to such model updates are detected only later, mostly when Undo/Redo operations stop working as expected.

Use EMF Commands! Modify your model objects only through commands, either by using EMF commands directly, extending them, or creating your own. When you need to combine many operations into a single command, use CompoundCommand. Use ChangeCommand when you are about to make a lot of model changes in a transactional way.
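
A minimal sketch, assuming an object attached to an editing domain and a generated package (the Order model and its features are illustrative):

// Illustrative model names; the point is the command pattern itself.
EditingDomain domain = AdapterFactoryEditingDomain.getEditingDomainFor(order);

// A single undoable change that also notifies listeners.
Command rename = SetCommand.create(domain, order, OrderPackage.Literals.ORDER__NAME, "Rush order");

// Several changes clubbed into one undoable unit.
CompoundCommand update = new CompoundCommand("Update order");
update.append(rename);
update.append(AddCommand.create(domain, order, OrderPackage.Literals.ORDER__ITEMS, newItem));

domain.getCommandStack().execute(update);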

3. Not reacting to notifications/Overuse of notifications

EMF notification is one of the most powerful features of the framework: any change to a model object is notified to anyone interested in knowing about it. Many projects decide to ignore listening to model changes, and instead react by traversing the model again to look for changes, or by making assumptions about model changes and baking the behaviour into UI code.

When you want to react to a model change, don’t assume that you are the only one who will originate that change. Always listen to model changes and react accordingly. Don’t forget the handy EContentAdapter class.
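
A minimal sketch of such a listener, attached to the root of a containment tree (the feature being filtered on is illustrative):

// EContentAdapter receives notifications from the whole containment tree,
// not just the object it is attached to.
EContentAdapter listener = new EContentAdapter() {
    @Override
    public void notifyChanged(Notification notification) {
        super.notifyChanged(notification); // keeps the adapter attached to added/removed children
        if (notification.getFeature() == OrderPackage.Literals.ORDER__NAME
                && !notification.isTouch()) {
            System.out.println("Order renamed to " + notification.getNewValue());
        }
    }
};
rootObject.eAdapters().add(listener);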

On the contrary, some projects add a lot of listeners for model changes without applying proper filters. This can greatly slow down your application.

4. Forgetting EcoreUtil

Before writing your own utility functions, keep an eye on the EcoreUtil class. Usually, the generic function you are trying to implement is already there.
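
A few examples of what EcoreUtil already offers (the Order objects are illustrative):

// Deep copy of an object, including its containment tree.
Order copy = EcoreUtil.copy(order);

// Structural equality instead of a hand-written comparison.
boolean same = EcoreUtil.equals(order, copy);

// Root of the containment tree, and deletion that also cleans up references.
EObject root = EcoreUtil.getRootContainer(order);
EcoreUtil.delete(obsoleteItem, true);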

5. Leaving Namespace URI as default

When you create an Ecore model, EMF generates a default prefix and namespace URI for you. Mostly this is left as is, without realizing that this is the most important identifier of your new model. Only when newer versions of the model need to be released do people start looking for meaningful URIs.

Change the default to a descriptive URI before generating code, for example one that includes version information of the model (something like http://www.example.org/order/1.0).

6. UI getting out of sync with model

The EMF generated editor makes use of JFace viewers. These viewers are always kept in sync using the generated ItemProviders. However, these viewers cannot always satisfy all UI requirements; you might have to use other SWT components or Eclipse Forms. At this juncture, many implementors mix UI and model concerns together and in turn lose the sync between model and UI.

Although you could go ahead and implement your own custom viewers, the easier way is to use EMF Databinding.
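
A minimal databinding sketch that keeps an SWT text field and a model attribute in sync in both directions (the widget and model names are illustrative):

DataBindingContext bindingContext = new DataBindingContext();

IObservableValue uiObservable = WidgetProperties.text(SWT.Modify).observe(nameText);
IObservableValue modelObservable = EMFProperties.value(OrderPackage.Literals.ORDER__NAME).observe(order);

// The model stays the single source of truth; no manual sync code in the UI.
bindingContext.bindValue(uiObservable, modelObservable);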

7. Relying on default identification mechanism

Every EMF object is identified by an XPath-like URI fragment, for example //@orders.1/@items.3. This is the default referencing mechanism within an EMF model and across models. If you look closely, you can see that the default mechanism is based on feature names and indexes. This can turn dangerous if the index order changes and the references are not updated accordingly.

Extend this mechanism to uniquely identify your model objects, or simply assign an intrinsic ID to your model objects. EMF will then use these IDs to reference your objects.
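
A sketch of the resource-level variant (an alternative is to mark an attribute as an ID in the Ecore model itself, so the default fragment computation uses it; the order object is illustrative):

// A stable ID makes EMF write references like "#_AbC123" instead of
// fragile positional fragments like "//@orders.1/@items.3".
XMLResource resource = (XMLResource) order.eResource();
resource.setID(order, EcoreUtil.generateUUID());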

8. Forgetting reflective behavior

Many consider the reflective features of EMF an advanced topic. They are not as complex as they seem.

Instead of depending entirely on generated APIs, think about creating generic, reusable functions that make use of the meta-information every EObject carries. Have a look at the implementation of the EcoreUtil.copy function to see what I mean.
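
In the same spirit, a small sketch: a generic dump that works for any EObject, with no generated API involved.

// Walks all features reflectively using the meta-information in EClass.
public static void printFeatures(EObject object) {
    for (EStructuralFeature feature : object.eClass().getEAllStructuralFeatures()) {
        if (object.eIsSet(feature)) {
            System.out.println(feature.getName() + " = " + object.eGet(feature));
        }
    }
}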

9. Standalone? Why do I bother

A common misunderstanding is that EMF is Eclipse specific. EMF runs perfectly well outside Eclipse in “stand-alone” applications.

While developing applications based on EMF, design them so that the core is not tied to Eclipse. This allows your applications to run even in a non-OSGi, server-side environment.
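
A minimal standalone bootstrap, assuming a generated package named OrderPackage; the registrations below are what the Eclipse runtime would otherwise contribute via extension points:

ResourceSet resourceSet = new ResourceSetImpl();
// Register the package and a resource factory by hand (no extension points here).
resourceSet.getPackageRegistry().put(OrderPackage.eNS_URI, OrderPackage.eINSTANCE);
resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap()
        .put("xmi", new XMIResourceFactoryImpl());

Resource resource = resourceSet.getResource(URI.createFileURI("orders.xmi"), true);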

10. Not reading the EMF bible


This is the biggest mistake a newcomer could make. The book (EMF: Eclipse Modeling Framework by Steinberg, Budinsky, Paternostro and Merks) is a must-read for anyone who starts working with EMF.

Order now (hope this fetches me a beer at the next EclipseCon Europe ;))


(e)Wiki – A model based Wiki framework

October 20, 2010

1. What’s Wiki with a (e)?

(e)Wiki is a Wiki markup generation framework based on Mylyn WikiText and EMF. The framework allows reading and writing different Wiki markup formats – this time, the model driven way. If you have an existing model based tool chain, the framework can easily fit in to generate some quick documentation, without worrying about markup. You can reuse all the WikiText markup parsers and document builders, combined with the power of EMF features like EMF persistence, EMF notification etc.

The framework was developed as part of a customer project to generate Wiki documentation out of EMF based domain models. It is currently not open sourced or otherwise available to the public; talks with the customer about this are, however, ongoing.

This article gives an overview of the framework and its features. It also intends to demonstrate the extensibility of two powerful Eclipse frameworks: Mylyn WikiText and EMF.

2. Architecture

(e)Wiki is an add-on to Mylyn WikiText. WikiText is a framework that supports parsing/editing Wiki markup formats like MediaWiki, Textile, Confluence, TracWiki and TWiki, and writing them out as HTML, Eclipse Help, DocBook, DITA and XSL-FO. WikiText, however, doesn’t have an internal data model for documents (like the DOM for XML). (e)Wiki adds this missing layer to the WikiText framework. Instead of using a POJO data model, (e)Wiki uses a powerful EMF based metamodel. Using (e)Wiki you can read all the above markup formats and generate an (e)WikiModel based on EMF. The model can then be written out using any of the WikiText document builders.

3. (e)WikiModel

(e)WikiModel is an EMF based Wiki metamodel. It is a generic metamodel for any Wiki language.

(Only a partial model is shown above.)

(e)WikiModel is not intended to be used as a DSL for your domain; it captures the documentation aspects of your domain model. To describe semantic information, build a DSL of your own using a framework like Xtext.

4. Features

(e)Wiki currently delivers the following:

  • A metamodel for Wiki called (e)WikiModel
  • (e)WikiEditor to view/edit (e)Wiki files
  • Previewing of rendered (e)Wiki content
  • UI extensions to convert existing markups to (e)WikiModel
  • Generating Textile and Redmine markup (in addition to HTML and DocBook, as already supported by WikiText)
  • A feature to split generated Textile and Redmine markup files into subpages
  • Adding a Table of Contents to the generated output

5. Working with (e)Wiki

5.1. Creating a (e)WikiModel

An (e)Wiki instance can be created either from the Eclipse UI or programmatically.

5.1.1. Creating from UI

Create a markup file within the Eclipse workspace.

Right-click the markup file and invoke eWiki -> Generate eWiki.

An (e)Wiki file is created in the same folder as the selected file, with the extension .ewiki.

5.1.2. Creating with code

WikiText is based on the “Builder” design pattern. (e)Wiki uses the same pattern and adds a new DocumentBuilder, the EWikiDocumentBuilder. The snippet below shows how to convert existing markup (Textile in this case) to (e)Wiki.


final String markup = "h1. Quisque";

final ResourceSet resourceSet = new ResourceSetImpl();
final Resource eWikiResource = resourceSet.createResource(URI.createFileURI("..."));

MarkupParser parser = new MarkupParser();
parser.setMarkupLanguage(new TextileLanguage());
EWikiDocumentBuilder eWikiDocumentBuilder = new EWikiDocumentBuilder(eWikiResource);
parser.setBuilder(eWikiDocumentBuilder);
parser.parse(markup);

eWikiResource.save(Collections.EMPTY_MAP);

The snippet below shows how to create an (e)WikiModel using the model APIs. If you have worked with EMF generated model API code, this is no different, except that (e)Wiki adds additional convenience factory methods.

final ResourceSet resourceSet = new ResourceSetImpl();
final Resource eWikiResource = resourceSet.createResource(URI.createFileURI("..."));
Document document = EwikiFactory.eINSTANCE.createDocument();
document.setTitle("Lorem ipsum");

EwikiFactory.eINSTANCE.createHeading(document, 1, "Quisque");

Paragraph paragraph = EwikiFactory.eINSTANCE.createParagraph();
document.getSegments().add(paragraph);
EwikiFactory.eINSTANCE.createText(paragraph, "Lorem ipsum dolor sit amet, consectetur adipiscing elit.");

BullettedList bullettedList = EwikiFactory.eINSTANCE.createBullettedList();
document.getSegments().add(bullettedList);
EwikiFactory.eINSTANCE.createListItem(bullettedList,"Mauris");
EwikiFactory.eINSTANCE.createListItem(bullettedList,"Etiam");

eWikiResource.getContents().add(document);
eWikiResource.save(Collections.EMPTY_MAP);

5.2. (e)WikiEditor

The (e)WikiEditor provides a tree based editor and a preview tab.

5.2.1. Tree editor tab

The tree editor provides a tree view of the (e)Wiki file. Although the editor can be used to change the file, rich text would rarely be edited this way.

5.2.2. Preview tab

The preview tab provides a preview of the (e)Wiki file as rendered by your default browser.

 

5.3. Generating output

An (e)Wiki file can be converted to HTML, DocBook, Textile and Redmine markup formats.

5.3.1. Generating output from UI

Right-clicking the (e)Wiki file brings up the context menu for conversions.

5.3.2. Generating output from Code

You can use any of the existing DocumentBuilders in WikiText to generate output from an (e)WikiModel. The snippet below shows how to convert an (e)Wiki instance to HTML programmatically using the WikiText HtmlDocumentBuilder.

final IFile eWikiFile = ...;
ResourceSet resourceSet = new ResourceSetImpl();
final Resource wikiResource = resourceSet.getResource(URI.createFileURI(eWikiFile.getRawLocation().toOSString()), true);
StringWriter writer = new StringWriter();

EWikiParser parser = new EWikiParser();
parser.setMarkupLanguage(new EWikiLanguage());
parser.setBuilder(new HtmlDocumentBuilder(writer));
parser.parse(wikiResource);

final String htmlContent = writer.toString();

Provisioning your Target Platform as local p2 site

October 9, 2010

Provisioning your target platform from a p2/update site is a gem of a feature that was released with Eclipse 3.5. Chris called it “fantasy come true!”, and so did many others. You can read the details on how this works in this article by Chris. (On the other hand, if you haven’t used target platforms at all, stop doing any further Eclipse plug-in development and read about them here first, right from the target platform expert Ekke.)

Introduction

Provisioning your target platform from a p2 site basically allows you to download any Eclipse flavor you like and, with the click of a button, set up your target platform. PDE automatically downloads all the plug-ins required for your target platform from different software sites, based on your target definition, and builds your workspace. Although this is a great feature, using it in your workflow has some shortcomings.

  1. If your target platform is large (which is mostly the case), a lot of bandwidth is used every time the bundles are downloaded from the different software sites. And if you do not have high-speed internet or your internet access is restricted, initializing the target platform can take very long.
  2. Not all bundles are available from p2/update sites. Although p2 marketing has been quite successful recently, many plug-in providers still don’t host their products as p2/update sites. You could keep such plug-ins in local folders or in a shared network folder, but this takes away the power of provisioning.
  3. Many p2/update sites don’t believe in archiving older versions and continuing to provide them. Hence you have no guarantee that target platforms based on older bundles will work forever.
  4. Many development projects version their target platforms and maintain them in versioning systems like SVN. This is a prerequisite for reconstructing older target platforms, but it is not possible with the approach above.

If you take a closer look, you can see that many of the limitations above stem from the fact that the referenced software sites are neither local nor within your control. All of them can be avoided if you provision your target platform from a local p2/update site. This means that instead of downloading bundles from a public software site, you (and your team) download them from a local p2/update site that you have set up with all the plug-ins/features your target platform requires. In this article, I describe a workflow for setting up such a local p2/update site for your target platform.

1. The aggregation task

The first step is to aggregate all the plug-ins and features required by your target platform. You can easily do this by creating a target definition file that references the software sites of your bundle providers, using “New -> Target Definition“. If a bundle provider doesn’t have a p2/update site, reference the bundles from a local download folder.

You will never share/distribute this “setup” target file

A sample target definition file could look like this.
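
In XML form, such a “setup” target definition might look roughly like this (the repository URL and unit id are placeholders, not a recommendation):

<?xml version="1.0" encoding="UTF-8"?>
<target name="Setup Target">
    <locations>
        <location includeAllPlatforms="true" type="InstallableUnit">
            <repository location="http://download.eclipse.org/releases/helios"/>
            <unit id="org.eclipse.sdk.feature.group" version="0.0.0"/>
        </location>
    </locations>
</target>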

If you want your target to support multiple platforms, make sure to check the “Include all environments” checkbox while adding a software site.

2. Testing the target

Using the target definition editor, set the target by clicking “Set as Target Platform“.

Additionally, you can set the target platform under “Preferences -> Plug-in Development -> Target Platform” by selecting the newly created target. If all the projects in your workspace build fine, you have set up the target platform correctly. Otherwise, repeat Step 1 to add the missing plug-ins until the workspace errors vanish.

3. Creating a new target platform feature

Using the feature project wizard, create a new feature for the target platform.

Make sure to properly name and version the feature.

In the plug-in selection page, the plug-ins listed are the ones in your target platform together with your workspace plug-in projects. Select all plug-ins except the ones in your workspace.

4. Creating a p2 site for target platform

You could do this either using PDE or using Buckminster.

If you want your target platform to support multiple development platforms (OS X, Linux, Windows etc.), use Buckminster (unfortunately, PDE had some export issues doing this).

4.1. Creating p2 update site using PDE

Create a new update site project using the update site project wizard.

Create a proper category and add the target platform feature you created earlier as a child, using “Add Feature…“.

Build the update site using “Build/Build All“.

If the build is successful, the update site project could look like this.

4.2 Creating p2 site using Buckminster

Install Buckminster from update site http://download.eclipse.org/tools/buckminster/updates-3.6 (for Helios).

Create a buckminster.properties file in your target platform feature project.

## buckminster.properties ##

# Where all the output should go
buckminster.output.root=${user.home}/project/site

# Where the temp files should go
buckminster.temp.root=${user.home}/project/tmp

# How .qualifier in versions should be replaced
qualifier.replacement.*=generator:lastRevision

target.os=*
target.ws=*
target.arch=*

Right-click the target platform feature project and select “Buckminster -> Invoke Action…“.

Select the buckminster.properties file using the “Workspace” button, followed by “site.p2” from the “Matching items” list.

Select “OK” to close the dialog.

The p2 site will be created in the folder pointed to by buckminster.output.root in your buckminster.properties file.

You do not have to create an update site project (as you did in the PDE approach) when using Buckminster.

5. Deploying the target platform

Identify server space in your local network where the p2 site can be deployed (preferably a web server), and copy the p2 site contents to this location.

6. Creating the final target definition

Create a new target file and point it to the newly hosted p2 site (similar to Step 1)

7. Testing the final target platform from local p2 site

Open the target file and set the target using “Set as Target Platform” as before. The bundles will now be downloaded from the local p2 site and your workspace built again. If you have zero errors, you have succeeded in setting up a p2 update site for your target platform. Distribute the final target file to your development team, and they can set their target platforms in the same way.

8. Managing different versions of target platforms

You can provision multiple versions of your target platform from the same local p2 site. All you need to do is create a new feature for the new target platform, add it to the update site’s site.xml, build and deploy as before, and distribute the updated target file.
