Posted about 17 hours ago
What is an Eclipse DemoCamp and why should I organise one?
The next major release of Eclipse, Oxygen, is coming on June 28, and it marks the start of this year's Eclipse DemoCamps season. If you or your colleagues are considering a DemoCamp for 2017, we would like to help!

What's a DemoCamp?

You may be asking yourself what the heck a DemoCamp is and why you should care. Eclipse DemoCamps are typically one-day or even evening events organized by Eclipse community members all over the world. The organizers bring together expert speakers and attendees from their local community. In other words, it's a free event where you get to meet fellow Eclipsians and learn from each other through demos and talks about Eclipse technology.

How do I get started?

This is the best part: wherever you are, you can organize an Eclipse DemoCamp! You choose the place, set the time, organize the venue (maybe a local pub or company office), provide a screen and projector, and arrange for refreshments. To tell us that you are planning an Eclipse DemoCamp:

Send us an email at democamps@eclipse.org to ask about support, speaker ideas, or possible goodies
Add it to the DemoCamp 2017 wiki page

To add it, simply create a page with the program and venue information. If you use another service like Meetup, just add a link to it from the Eclipse wiki. We will be pleased to list it on events.eclipse.org.

How does the Eclipse Foundation help?

We, as the Eclipse Foundation, will contribute to the cost of food, beverages, and room rental, up to $300. We encourage organizers to find outside corporate sponsors to help organize their event. Sponsors usually contribute funds, food, or space. Please acknowledge your sponsors on the DemoCamp & Hackathon wiki page and at the event itself. We will also help you promote the event through the Eclipse Foundation's social media network and website. To read more about organizing an event, visit the page "Organise an Eclipse DemoCamp or Hackathon".
Eclipse Foundation staff also try to attend the DemoCamps. This is obviously not always possible, but who knows… we could be coming to yours! In 2016, DemoCamps took place in 19 different cities in 10 different countries: Austria, Canada, China, Germany, Guatemala, Hungary, India, Norway, Poland, and Switzerland! We need you to reach new places in 2017, and one of those places could be near you! We look forward to hearing from you.
Posted 2 days ago
This post is an introduction to the Vert.x-Swagger project, and describes how to use the Swagger-Codegen plugin and the SwaggerRouter class.

Eclipse Vert.x & Swagger

Vert.x and Vert.x Web are very convenient for writing REST APIs, especially the Router, which is very useful for managing all the resources of an API. But when I start a new API, I usually use the "design-first" approach, and Swagger is my best friend for defining what my API is supposed to do. Then comes the "boring" part of the job: converting the swagger file content into Java code. It's always the same: resources, operations, models… Fortunately, Swagger provides a codegen tool: Swagger-Codegen. With this tool, you can generate a server stub based on your swagger definition file. However, even though this generator supports many different languages and frameworks, Vert.x is missing. This is where the Vert.x-Swagger project comes in.

The project

Vert.x-Swagger is a Maven project providing two modules.

vertx-swagger-codegen is a Swagger-Codegen plugin which adds to the generator the capability of generating a Java Vert.x web server. The generated server mainly contains: POJOs for definitions, one verticle per tag, and one MainVerticle, which manages the other API verticles and starts an HttpServer. The MainVerticle uses vertx-swagger-router.

vertx-swagger-router's main class is SwaggerRouter. It is more or less a factory (and maybe I should rename the class) that can create a Router, using the swagger definition file to configure all the routes. For each route, it extracts parameters from the request (query, path, header, body, form) and sends them on the event bus, using as the address either the operationId or a computed id (chosen via a parameter in the constructor).

Let's see how it works

For this post, I will use a simplified swagger file, but you can find a more complex example here, based on the petstore swagger file.

Generating the server

First, choose your swagger definition.
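A minimal, hypothetical definition in the spirit of the post's wine-cellar example might look like this (the operation and model names here are my own, chosen only to be consistent with the generated file names shown further down):

```yaml
# Hypothetical minimal swagger definition; the post's real file is larger.
swagger: "2.0"
info:
  title: Wine Cellar API
  version: "1.0.0"
paths:
  /bottles/{bottle_id}:
    get:
      tags: [bottles]
      operationId: getBottle
      parameters:
        - name: bottle_id
          in: path
          required: true
          type: string
      responses:
        "200":
          description: The requested bottle
          schema:
            $ref: "#/definitions/Bottle"
definitions:
  Bottle:
    type: object
    properties:
      id:
        type: string
      name:
        type: string
```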
Here's a YAML file, but it could be a JSON file. Then, download these libraries: swagger-codegen-cli and vertx-swagger-codegen. Finally, run this command:

java -cp /path/to/swagger-codegen-cli-2.2.2.jar:/path/to/vertx-swagger-codegen-1.0.0.jar io.swagger.codegen.SwaggerCodegen generate \
  -l java-vertx \
  -o path/to/destination/folder \
  -i path/to/swagger/definition \
  --group-id your.group.id \
  --artifact-id your.artifact.id

For more information about how Swagger-Codegen works, you can read https://github.com/swagger-api/swagger-codegen#getting-started

You should see something like this in your console:

[main] INFO io.swagger.parser.Swagger20Parser - reading from ./wineCellarSwagger.yaml
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/model/Bottle.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/model/CellarInformation.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/BottlesApi.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/BottlesApiVerticle.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/InformationApi.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/InformationApiVerticle.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/resources/swagger.json
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/MainApiVerticle.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/resources/vertx-default-jul-logging.properties
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/pom.xml
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/README.md
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/.swagger-codegen-ignore

And this in your destination folder:

What has been created?

As you can see in 1, the vertx-swagger-codegen plugin has created one POJO per definition in the swagger file. Example: the Bottle definition. In 2a and 2b you can find: an interface which contains one function per operation, and a verticle which defines all operationIds and creates event bus consumers. Example: the Bottles interface. Example: the Bottles verticle.

… and now?

At line 23 of BottlesApiVerticle.java, you can see this:

BottlesApi service = new BottlesApiImpl();

This line will not compile until the BottlesApiImpl class is created. In all XXXApiVerticles, you will find a variable called service. It has the XXXApi type and is instantiated with a XXXApiImpl constructor. This class does not exist yet, since it is the business part of your API, so you will have to create these implementations.

Fine, but what if I don't want to build my API like this?

Well, Vert.x is unopinionated, but the way vertx-swagger-codegen creates the server stub is not. So if you want to implement your API the way you want, while still enjoying dynamic routing based on a swagger file, the vertx-swagger-router library can be used standalone.
Just import this jar into your project, and you will be able to create your Router like this:

FileSystem vertxFileSystem = vertx.fileSystem();
vertxFileSystem.readFile("YOUR_SWAGGER_FILE", readFile -> {
    if (readFile.succeeded()) {
        Swagger swagger = new SwaggerParser().parse(readFile.result().toString(Charset.forName("utf-8")));
        Router swaggerRouter = SwaggerRouter.swaggerRouter(Router.router(vertx), swagger, vertx.eventBus(), new OperationIdServiceIdResolver());
        [...]
    } else {
        [...]
    }
});

You can omit the last parameter in SwaggerRouter.swaggerRouter(...); in that case, addresses will be computed instead of using the operationId from the swagger file. For instance, GET /bottles/{bottle_id} will become GET_bottles_bottle-id.

Conclusion

Vert.x and Swagger are great tools for building and documenting an API, but using both in the same project can be painful. The Vert.x-Swagger project was made to save that time, letting developers focus on business code. It can be seen as an API framework on top of Vert.x. You can also use the SwaggerRouter in your own project without using Swagger-Codegen. In future releases, more information from the swagger file will be used to configure the router, and other languages will certainly be supported. Though Vert.x is polyglot, the Vert.x-Swagger project currently supports only Java. If you want to contribute to support more languages, you're welcome :) Thanks for reading.
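To make the computed-address fallback concrete, the naming scheme from the example (GET /bottles/{bottle_id} becoming GET_bottles_bottle-id) can be sketched in a few lines of plain Java. This is a hypothetical reconstruction for illustration only, not the actual SwaggerRouter code; the class and method names are made up.

```java
// Hypothetical sketch of deriving an event-bus address from an HTTP method
// and a swagger path, mirroring the post's example. Not SwaggerRouter's
// real implementation.
public class ComputedAddress {

    static String addressFor(String httpMethod, String swaggerPath) {
        String cleaned = swaggerPath
                .replace("{", "")   // drop path-parameter braces
                .replace("}", "")
                .replace("_", "-"); // underscores in parameter names become dashes
        // path separators become underscores: /bottles/bottle_id -> _bottles_bottle-id
        return httpMethod + cleaned.replace("/", "_");
    }

    public static void main(String[] args) {
        // GET /bottles/{bottle_id} -> GET_bottles_bottle-id
        System.out.println(addressFor("GET", "/bottles/{bottle_id}"));
    }
}
```

The point of such a scheme is that every route gets a deterministic address even when the swagger file omits operationId, at the cost of addresses that are less readable than hand-picked ids.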
Posted 4 days ago
JBoss Tools 4.4.4 and Red Hat JBoss Developer Studio 10.4 for Eclipse Neon.3 are here waiting for you. Check them out!

Installation

JBoss Developer Studio comes with everything pre-bundled in its installer. Simply download it from our JBoss Products page and run it like this:

java -jar jboss-devstudio-.jar

JBoss Tools, or Bring-Your-Own-Eclipse (BYOE) JBoss Developer Studio, requires a bit more: this release requires at least Eclipse 4.6.3 (Neon.3), but we recommend using the latest Eclipse 4.6.3 Neon JEE bundle, since then you get most of the dependencies preinstalled. Once you have installed Eclipse, you can either find us on the Eclipse Marketplace under "JBoss Tools" or "Red Hat JBoss Developer Studio", or, for JBoss Tools, use our update site directly: http://download.jboss.org/jbosstools/neon/stable/updates/

What is new?

Our main focus for this release was improvements for container-based development, and bug fixing.

Improved OpenShift 3 and Docker Tools

We continue to work on providing a better experience for container-based development in JBoss Tools and Developer Studio. Let's go through a few interesting updates here.

OpenShift server adapter: enhanced flexibility

The OpenShift server adapter is a great tool that allows developers to synchronize local changes in the Eclipse workspace with running pods in the OpenShift cluster. It also allows you to remote-debug those pods when the server adapter is launched in Debug mode. The supported stacks are Java and NodeJS. As pods are ephemeral OpenShift resources, the server adapter definition was based on an OpenShift service resource, and the pods were then dynamically computed from the service selector. This had a major drawback: the feature could only be used for pods that are part of a service. That may be logical for web-based applications, as a route (and thus a service) is required in order to access the application.
So, it is now possible to create a server adapter from the following OpenShift resources: service (as before), deployment config, replication controller, or pod. If a server adapter is created from a pod, it will be created from the associated OpenShift resource, in this preferred order: service, deployment config, replication controller. As the OpenShift explorer used to display only OpenShift resources that were linked to a service, it has been enhanced as well: it now also displays resources linked to a deployment config or replication controller. Here is an example of a deployment with no service, i.e. a deployment config:

Since an OpenShift server adapter can be created from different kinds of resources, the kind of the associated resource is displayed when creating the OpenShift server adapter. Once created, the kind of OpenShift resource is also displayed in the Servers view, and this information is also available from the server editor.

Security vulnerability fixed in the certificate validation database

When you use the OpenShift tooling to connect to an OpenShift API server, the certificate of the OpenShift API server is first validated. If the issuer authority is a known one, the connection is established. If the issuer is unknown, a validation dialog is first shown to the user with the details of the OpenShift API server certificate as well as the details of the issuer authority; if the user accepts it, the connection is established. There is also an option to store the certificate in a database, so that the next time a connection is attempted to the same OpenShift API server, the certificate will be considered valid and no validation dialog will be shown again. We found a security vulnerability here: the certificate was stored only partially (not all attributes were stored), so a different certificate could be interpreted as validated when it should not have been. We had to change the format of the certificate database.
As the certificates stored in the previous database were not stored entirely, there was no way to provide a migration path. As a result, after the upgrade, the certificate database will be empty, so if you had previously accepted some certificates, you will need to accept them again to refill the certificate database.

CDK 3 server adapter

The CDK 3 server adapter has been around for quite a long time. It used to be Tech Preview, as CDK 3 was not officially released; it is now officially available. While the server adapter itself has limited functionality, it is able to start and stop the CDK virtual machine via its minishift binary. Simply hit Ctrl+3 (Cmd+3 on OS X) and type CDK; that will bring up a command to set up and/or launch the CDK server adapter. You should see the old CDK 2 server adapter along with the new CDK 3 one (labeled Red Hat Container Development Kit 3). All you have to do is set the credentials for your Red Hat account, the location of the CDK's minishift binary file, and the type of virtualization hypervisor. Once you're finished, a new CDK server adapter will be created and visible in the Servers view. Once the server is started, Docker and OpenShift connections should appear in their respective views, allowing you to quickly create a new OpenShift application and begin developing your AwesomeApp in a highly replicable environment.

OpenShift Container Platform 3.5 support

OpenShift Container Platform (OCP) 3.5 has been announced by Red Hat. JBoss Tools 4.4.4.Final has been validated against OCP 3.5.

OpenShift server adapter extensibility

The OpenShift server adapter has long supported EAP/WildFly and NodeJS based deployments. It does a great deal of synchronizing local workspace changes to remote deployments on OpenShift, which have been standardized through image metadata (labels). But each runtime has its own specifics.
As an example, WildFly/EAP deployments require that a redeploy trigger be sent after the files have been synchronized. In order to reduce the technical debt and allow support for other runtimes (there are lots of them in the microservices world), we have refactored the OpenShift server adapter so that each runtime's specifics are now isolated, and it will be easy and safe to add support for new runtimes. For a full in-depth description, see the following wiki page.

Pipeline builds support

Pipeline-based builds are now supported by the OpenShift tooling. When creating an application from a template, if one of the builds is based on a pipeline, you can view the details of the pipeline. When your application is deployed, you can see the details of the build configuration for the pipeline-based builds. More to come as we improve the pipeline support in the OpenShift tooling.

Update of Docker Client

The underlying com.spotify.docker.client plug-in used to access the Docker daemon has been upgraded to 3.6.8.

Run Image Network Support

A new page has been added to the Docker Run Image wizard and Docker Run Image launch configuration that allows the end user to specify the network mode to use. A user can choose from Default, Bridge, Host, None, Container, or Other. If Container is selected, the user must choose an active container whose network mode will be shared. If Other is specified, a named network can be specified.

Refresh Connection

Users can now refresh the entire connection from the Docker Explorer view. Refresh can be performed two ways: using the right-click context menu on the connection, or using the Refresh menu button when the connection is selected.

Server Tools

API change in the JMX UI's New Connection wizard

While hardly something most users will care about, extenders may need to be aware that the API for adding connection types to the 'New JMX Connection' wizard in the 'JMX Navigator' has changed.
Specifically, the 'org.jboss.tools.jmx.ui.providerUI' extension point has been changed: where it previously had a child element called 'wizardPage', it now requires a 'wizardFragment'. A 'wizardFragment' is part of the 'TaskWizard' framework first used in WTP's ServerTools, which has for many years been used throughout JBoss Tools. This framework allows wizard workflows where the set of pages to be displayed can change based on the selections made on previous pages. This change was made as a direct result of a bug caused by the addition of the Jolokia connection type, with which some standard workflows could no longer be completed. The change only affects adopters and extenders, and should cause no noticeable difference for users, other than that the bug in question has been fixed.

Hibernate Tools

Hibernate runtime provider updates

A number of additions and updates have been performed on the available Hibernate runtime providers. The Hibernate 5.0 runtime provider now incorporates Hibernate Core version 5.0.12.Final and Hibernate Tools version 5.0.5.Final. The Hibernate 5.1 runtime provider now incorporates Hibernate Core version 5.1.4.Final and Hibernate Tools version 5.1.3.Final. The Hibernate 5.2 runtime provider now incorporates Hibernate Core version 5.2.8.Final and Hibernate Tools version 5.2.2.Final.

Forge Tools

Forge runtime updated to 3.6.1.Final

The included Forge runtime is now 3.6.1.Final. Read the official announcement here.

What is next?

With JBoss Tools 4.4.4 and Developer Studio 10.4 out, we are already working on the next release, for Eclipse Oxygen. Enjoy!

Jeff Maury
Posted 4 days ago by nore...@blogger.com (Brian Smith)
We’re proud to announce that N4JS has been accepted as an Eclipse project and the final official steps are underway. Our team has been working very hard to wrap up the initial contribution, and we are excited to be part of Eclipse. The project will be hosted at https://eclipse.org/n4js, although this currently redirects to the project description while our pages are being created. In the meantime, N4JS is already open source: our GitHub project pages are located at http://numberfour.github.io/n4js/, which contains articles, documentation, the source for N4JS, and more.

Some background information about us: N4JS was developed by Enfore AG, founded in 2009 as NumberFour AG by Marco Boerries. Enfore’s goal is to build an open business platform for 200+ million small businesses and to provide those businesses with the tools and solutions they need to stay competitive in a connected world.

Initially, JavaScript was intended as the main language for third-party developers contributing to our platform; it runs directly in the browser and it’s the language of the web! One major drawback is the absence of a static type system, which turned out to be an essential requirement for us. We wanted to ensure reliable development of our platform and our own applications, as well as make life easier for third-party contributors to the Enfore platform. That’s the reason why we developed N4JS, a general-purpose programming language based on ECMAScript 5 (commonly known as JavaScript). The language combines the dynamic aspects of JavaScript with the strengths of Java-like types to facilitate the development of flexible and reliable applications.

N4JS is constantly growing to support new modern language features as they become available. Some of the features already supported are concepts introduced in ES6, including arrow functions, async/await, modules, and much more.
Our core team is always making steady improvements, and our front-end team makes use of the language and IDE daily for their public-facing projects. For more information on how the N4JS language differs from other JavaScript variants that introduce static typing, see our detailed FAQ.

Why Eclipse?

For us, software development is much more than simply writing code, which is why we believe in IDEs, and in Eclipse in particular. We were looking for developer tools which leverage features like live code validation, content assist (aka code completion), quick fixes, and a robust testing framework. Contributors to our platform can benefit from these for safe and intuitive application development.

We tried very hard to design N4JS so that Java developers feel at home when writing JavaScript, without sacrificing JavaScript’s support for dynamic and functional features. Our vision is to provide an IDE for statically typed JavaScript that feels just like JDT. This is why we strongly believe that N4JS could be particularly interesting for Eclipse (Java) developers. Aside from developers who are making use of N4JS, there are areas in the development of N4JS itself which would be of particular interest to committers versed in type theory, semantics, EMF, and Xtext, and to those who generally enjoy solving the multitude of challenges involved in creating new programming languages.

What’s next?

While we are moving the project to Eclipse, there are plenty of important checks that must be done by the Eclipse Intellectual Property Team. The initial contribution is under review, with approximately thirty Contribution Questionnaires created. This is a great milestone for us and reflects the huge effort involved in the project to date. We look forward to joining Eclipse, taking part in the ecosystem in an official capacity, and seeing what the community can do with N4JS.
While we complete these final requirements, we want to extend many thanks to all at Eclipse who are helping out with the process so far!
Posted 5 days ago
The Eclipse IoT community has been working hard on some pretty awesome things over the past few months! Here is a quick summary of what has been happening.

Open Testbeds

We recently announced the launch of the Eclipse IoT Open Testbeds. Simply put, they are collaborations between vendors and open source communities that aim to demonstrate and test the commercial and open source components needed to create specific industry solutions.

The Asset Tracking Management Testbed is the very first one! It is a collaboration between Azul Systems, Codenvy, Eurotech, Red Hat, and Samsung’s ARTIK team. It demonstrates how assets with various sensors can be tracked in real time, in order to minimize the cost of lost or damaged parcels. You can learn more about the Eclipse IoT Open Testbeds here. Watch Benjamin Cabé present the Asset Tracking testbed demo in the video below; it was recorded at the Red Hat Summit in Boston this month.

Case Study

We have been working with Deutsche Bahn (DB) and DB Systel to create a great case study that demonstrates how open source IoT technology is being used on the German railway system. They are currently using two Eclipse IoT projects, Eclipse Paho and Eclipse Mosquitto, among other technologies. In other words, if you’ve taken a DB train in Germany, you might have witnessed the “invisible” work of Eclipse IoT technology at the station or on board. How awesome is that?!

Case Study — Eclipse IoT and DB

Upcoming IoT Events

I am currently working on the organization of two upcoming Eclipse IoT Days that will take place in Europe this fall! 🍂 🍁 🍃 We are currently accepting talks for both events. Go on, submit your passion!
I am excited to read your proposal :)

Eclipse IoT Day @ Thingmonk
September 11 | London, UK
📢 Email us your proposal: iot at eclipse dot org

Eclipse IoT Day @ EclipseCon Europe
October 24 | Ludwigsburg, Germany
📢 Propose a talk

I look forward to meeting you in person at both events!

— Roxanne (Yes, I decided to sign this blog post.)
Posted 5 days ago
With the release of Red Hat JBoss Developer Studio 10.2, it is now possible to install Red Hat JBoss Developer Studio as an RPM, available as a tech preview. The purpose of this article is to describe the steps you should follow to install it.

Red Hat Software Collections

The JBoss Developer Studio RPM relies on Red Hat Software Collections. You don’t need to install Red Hat Software Collections, but you do need to enable the Red Hat Software Collections repositories before you start the installation of Red Hat JBoss Developer Studio.

Enabling the Red Hat Software Collections base repository

The identifier for the repository is rhel-server-rhscl-7-rpms on Red Hat Enterprise Linux Server and rhel-workstation-rhscl-7-rpms on Red Hat Enterprise Linux Workstation.

The command to enable the repository on Red Hat Enterprise Linux Server is:

sudo subscription-manager repos --enable rhel-server-rhscl-7-rpms

The command to enable the repository on Red Hat Enterprise Linux Workstation is:

sudo subscription-manager repos --enable rhel-workstation-rhscl-7-rpms

For more information, please refer to the Red Hat Software Collections documentation.

JBoss Developer Studio repository

As this is a tech preview, you need to configure the JBoss Developer Studio repository manually. Create a file /etc/yum.repos.d/rh-eclipse46-devstudio.repo with the following content:

[rh-eclipse46-devstudio-stable-10.x]
name=rh-eclipse46-devstudio-stable-10.x
baseurl=https://devstudio.redhat.com/static/10.0/stable/rpms/x86_64/
enabled=1
gpgkey=https://www.redhat.com/security/data/a5787476.txt
gpgcheck=1
upgrade_requirements_on_install=1
metadata_expire=24h

Install Red Hat JBoss Developer Studio

You’re now ready to install Red Hat JBoss Developer Studio through RPM. Enter the following command:

sudo yum install rh-eclipse46-devstudio

Answer 'y' when the transaction summary is ready, to continue the installation.
Answer 'y' one more time when you see the request to import the GPG public key:

Public key for rh-eclipse46-devstudio .rpm is not installed
Retrieving key from https://www.redhat.com/security/data/a5787476.txt
Importing GPG key 0xA5787476:
 Userid : "Red Hat, Inc. (development key) "
 Fingerprint: 2d6d 2858 5549 e02f 2194 3840 08b8 71e6 a578 7476
 From : https://www.redhat.com/security/data/a5787476.txt
Is this ok [y/N]:

After all required dependencies have been downloaded and installed, Red Hat JBoss Developer Studio is available on your system through the standard update channel! You should see messages like the following:

Launch Red Hat JBoss Developer Studio

From the system menu, mouse over the Programming menu, and the Red Hat Eclipse menu item will appear. Select this menu item, and the Red Hat JBoss Developer Studio user interface will appear.

Enjoy!

Jeff Maury
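Condensed into one place, the repository file and install flow described above can be sketched as follows. The file is written to the current directory here purely for illustration; on a real RHEL system it belongs in /etc/yum.repos.d/, and the subscription-manager and yum steps (shown as comments) require an entitled system.

```shell
# Write the tech-preview repo definition (content exactly as in the post).
cat > rh-eclipse46-devstudio.repo <<'EOF'
[rh-eclipse46-devstudio-stable-10.x]
name=rh-eclipse46-devstudio-stable-10.x
baseurl=https://devstudio.redhat.com/static/10.0/stable/rpms/x86_64/
enabled=1
gpgkey=https://www.redhat.com/security/data/a5787476.txt
gpgcheck=1
upgrade_requirements_on_install=1
metadata_expire=24h
EOF

# On an entitled RHEL Server you would then run:
#   sudo subscription-manager repos --enable rhel-server-rhscl-7-rpms
#   sudo cp rh-eclipse46-devstudio.repo /etc/yum.repos.d/
#   sudo yum install rh-eclipse46-devstudio

# Sanity-check that GPG verification stays enabled in the file we wrote.
grep '^gpgcheck=1$' rh-eclipse46-devstudio.repo
```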
Posted 5 days ago by cedric...@obeo.fr (Cédric Brun)
Every year, the Eclipse M7 milestone acts as a very strong deadline for the projects which are part of the release train: it’s then time for polishing and refining!

Time's up! Pencils down, it's M7!— Cédric Brun (@bruncedric) 4 May 2010

When your company is responsible for a number of inter-dependent projects, some of them core technologies like EMF Services and the GMF Runtime, others user-facing tools like Acceleo, Sirius or EcoreTools, and packaging and integration oriented projects like Amalgam or the Eclipse Packaging Project, and all of these releases need to be coordinated, then May is a busy month.

This week: M7 milestones for EcoreTools, Amalgam, Sirius, testing the Modeling package. Plot twist: 3 work days! pic.twitter.com/msqQkImRu4— Cédric Brun (@bruncedric) 3 May 2016

I’m personally involved in EcoreTools, which puts me in the position of consumer of the other technologies, and my plan for Oxygen was to make use of the Property Views support included in Sirius. This support allows me, as the maintainer of EcoreTools, to specify directly through the .odesign file every tab displayed in the Properties view. Just like the rest of Sirius, it is 100% dynamic, with no need for code generation or compilation, and completely flexible, with the ability to use queries in every part of the definition.

Before Oxygen, EcoreTools already had property editors. Some of them were coded by hand and were developed more than 8 years ago; when I replaced the legacy modeler using Sirius, I made sure to reuse those highly tuned property editors. Others I generated using the first generation of the EEF framework, so that I could cover every type in Ecore and benefit from the dialogs to edit properties using double-click. The intent at that time was to make the modeler usable in fullscreen, when no other view is visible.
Because of this requirement, I had to wait for the Sirius team to work its magic: the Properties views support was ready for production in Sirius 4.1, but it did not yet include any support for dialogs and wizards. Then magic happened: the support for dialogs and wizards is now completely merged in Sirius, starting with M7.

In EcoreTools, the code responsible for those property editors represents more than 70% of the total code, which peaks at 28K lines.

(Figure: Lines of Java code subject to deletion in EcoreTools. In gray are the plugins which are subject to removal once I use this new feature.)

As a developer, one can only rejoice at the idea of deleting so much code! I went ahead and started working on this. The schedule was tight, but thanks to the ability to define reflective rules using dynamic mappings, I could quickly cover everything in Ecore and get those new dialogs working.

(Figure: New vs old dialogs)

Just by using a dozen reflective rules, and adding specific pages or widgets when needed.

(Figure: The tooling definition in ecore.odesign)

It went so fast that I could add new tools for the generation settings through a specific tab.

(Figure: Genmodel properties exposed through a specific tab)

And even introduce a link to navigate directly to the Java code generated from the model.

(Figure: Link opening the corresponding generated Java code)

Even support for EAnnotations could be implemented in a nice way.

(Figure: Tab to add, edit or delete any EAnnotation)

As a tool provider, I could focus on streamlining the experience: providing tabs and actions so that end users don’t have to leave the modeler to adapt the generation settings or launch the code generation, and giving visual clues when something is invalid. I went through many variants of these UIs just to get the feel of them; as I get instant feedback, I only need minutes to rule out an option. I have a whole new dimension I can use to make my tool super effective.
This is what Sirius is about: empowering the tool provider to focus on the user experience of its users. It is just one of the many changes we have been working on since last year to improve the user experience of modeling tools. Mélanie and Stéphane will present a talk on this very subject during EclipseCon France in Toulouse: “All about UX in Sirius”. All of these changes are landing in Eclipse Oxygen starting with M7. As they are newly introduced, I have no doubt I’ll have some polishing and refining to do, and I’m counting on you to report anything suspicious.

“EcoreTools: user experience revamped thanks to Sirius 5.0” was originally published by Cédric Brun at CTO @ Obeo on May 19, 2017.
Posted 6 days ago
We worked with Deutsche Bahn (DB) to find out how they use Eclipse IoT technology on their railway system!
Posted 6 days ago by nore...@blogger.com (Kim Moir)
I moved my blog to WordPress. The new location is https://kimmoir.blog/