
News

Posted 18 days ago by sharan
Apache OFBiz News July 2016

Welcome to our regular monthly round-up of OFBiz news. This month we have news about the new build system in our trunk, the introduction of unit tests, ongoing support for our unreleased branches, and the community's selection of a new logo.

Changeover from Ant to Gradle in the OFBiz Trunk

As mentioned in our last update, a patch was being prepared to change our existing build system (Ant) over to Gradle. A key driver of the change was to remove external dependencies from the source code; in future releases, Gradle will automatically download any dependencies. A lot of hard work was done, and during July the patch was applied to the trunk. This is a significant step for the project as it is a major change. Thanks to everyone who helped with reviewing, testing, and the removal of dependencies. Ensuring that all the functionality available with Ant is also available in Gradle is still ongoing, with some clean-up also being done. There are still a few dependencies left to remove, and work is in progress to finalise these.

Introduction of Unit Tests

Our existing code made use of integration tests rather than unit tests. With the change to Gradle, we now have the ability to introduce unit tests and Test Driven Development (TDD). This will improve the quality of the code and ensure that developers test their code. The unit test setup is now in place in the trunk, and an initial patch with unit tests for the start component has been submitted. All developers are encouraged to begin writing and including more unit tests.

Support for 14.12 and 15.12

With the change of build system in the trunk, it was important that current users and service providers have access to extended support for the existing codebase.
There are currently two unreleased branches, 14.12 (created in December 2014) and 15.12 (created in December 2015), that our service providers and developers have been customising for their customer implementations. To ease the transition and to keep the impact low, the community has agreed to backport bug fixes and improvements into these branches until July 2017.

New Project Logo

This month the community has been discussing whether or not to change the project logo. The OFBiz trademark registration has been finalised, and this seemed a good point to talk about any potential changes. Changing a logo is a significant move as it forms part of the project identity, so after a lot of community discussion, three potential design options were selected for a community vote. They were as follows:

Option 1: Based on our existing logo
Option 2: Based on the correct project name spelling and the new ASF feather
Option 3: Based on our old OFBiz power button

Anyone from the community could vote, and the vote was open for 5 days. The results were summarised at the following wiki page: OFBiz Logo Survey Results. The most popular selection was Option 3, which uses the icon from the original OFBiz logo when the project first came to Apache. The icon has been re-worked to use the same colours as the new ASF feather.
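The unit-test setup described above runs small, dependency-free checks under Gradle's test task. As a rough illustration of the kind of test now possible (the class and method names below are hypothetical, not actual OFBiz code):

```java
// Hypothetical sketch of a unit-testable helper plus a minimal check, in the
// spirit of the tests added under /src/test/java for the start component.
// Names here are illustrative only -- this is not actual OFBiz code.
public class StartArgUtil {

    // Strips a leading "--" from a command-line argument, if present.
    public static String stripDashes(String arg) {
        return arg.startsWith("--") ? arg.substring(2) : arg;
    }

    // In the trunk, such checks would live in a JUnit test class executed by
    // Gradle's "test" task; a plain main() keeps this sketch self-contained.
    public static void main(String[] args) {
        check(stripDashes("--load-data").equals("load-data"));
        check(stripDashes("start").equals("start"));
        System.out.println("all checks passed");
    }

    private static void check(boolean ok) {
        if (!ok) throw new AssertionError("unit check failed");
    }
}
```

Under the new build, a test class like this would normally be run via the Gradle test task rather than a hand-rolled main(); the main() above exists only to keep the sketch runnable on its own.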
New Features and Improvements

Functional enhancements and improvements, as well as updates of third-party libraries and source code refactoring:

- Fixed the distortion in the UI of the payment section when there is a billing account present for a customer while placing a sales order (OFBIZ-7484)
- Enforced noninstantiability in multiple classes (OFBIZ-7601) (OFBIZ-7588) (OFBIZ-7562) (OFBIZ-7551) (OFBIZ-7690) (OFBIZ-7715) (OFBIZ-7732) (OFBIZ-7590) (OFBIZ-7600) (OFBIZ-7710) (OFBIZ-7733) (OFBIZ-7593) (OFBIZ-7630) (OFBIZ-7541) (OFBIZ-7740) (OFBIZ-7685) (OFBIZ-7742) (OFBIZ-7687) (OFBIZ-7688) (OFBIZ-7686) (OFBIZ-7744) (OFBIZ-7689) (OFBIZ-7691) (OFBIZ-7684) (OFBIZ-7692) (OFBIZ-7708) (OFBIZ-7716) (OFBIZ-7717)
- Added content lookup when adding Content to Product Config Item (OFBIZ-7629)
- Removed the Google Checkout and Google Base components from specialpurpose as they were discontinued (OFBIZ-7705) (OFBIZ-7727)
- Removed the HtmlScreenRenderer class after removing its dependency (OFBIZ-7635)
- Removed the HtmlFormRenderer class after removing its dependency (OFBIZ-7634)
- Replaced Apache Ant with Gradle (OFBIZ-7534)
- Allowed Gradle to generate JavaDocs even if they contain bad formatting (OFBIZ-7775)
- Renamed the generated *ofbiz-gradle.jar* to *ofbiz.jar* (OFBIZ-7893)
- Created a (short-term) Gradle "cleanAnt" task to remove old build dirs (OFBIZ-7898)
- Hid user inputs for Location/Lot# for fully issued components against a production run task (OFBIZ-7522)
- Added a new entity Check as a payment method (OFBIZ-7682)
- Added a look-up for Product Id on the "Add Product Store Surveys" screen (OFBIZ-7702)
- Migrated promotext.properties to UiLabels (OFBIZ-7297)
- Reformatted multiple FTLs for better readability, no functional changes (OFBIZ-7678) (OFBIZ-7679) (OFBIZ-7636)
- Added a new process on entity-auto for "create invocation" to automatically populate the fields "changeByUserLoginId" and "statusDate" for EntityStatus. The purpose is to track the user login for a status change and apply that to all entities that cover the EntityStatus concept (OFBIZ-7611) (OFBIZ-7617)
- Added Province data for Turkey via GeoData_TR.xml and the address format for Turkey in GeoData.xml (OFBIZ-7755)
- Removed the ability to persist entries in the file system from UtilCache (OFBIZ-7760)
- Removed the watermarker jar and the code that depended on it, because Watermarker is a dead project and the jar is no longer publicly available
- Removed a series of artifacts dependent on the old Beanshell jar that is going to be removed from the project; also removed all bsh libraries and remaining bsh functionality (OFBIZ-7763)
- Improved the FinAccountStatus, ShipmentStatus and BudgetStatus entities to manage the "changeByUserLoginId" field, along with the conversion of the minilang services to entity-auto (OFBIZ-7623) (OFBIZ-7618) (OFBIZ-7619)
- Added Province data for South Africa via GeoData_ZA.xml and the address format for South Africa in GeoData.xml (OFBIZ-7778)
- Cleaned the tools directory (OFBIZ-7795)
- Migrated all Java files from /src to /src/main/java (OFBIZ-7790)
- Renamed OFBiz artifacts from org.ofbiz.* to org.apache.ofbiz.* (OFBIZ-6274)
- Renamed search.properties in specialpurpose/lucene to lucene.properties (OFBIZ-6224)
- Added download definitions for drivers of commonly used open source RDBMSs to build.gradle (OFBIZ-7793)
- Moved SeoConfig.xml from product to e-commerce (OFBIZ-6125)
- Commented out the auto-detect font for Apache FOP (OFBIZ-6274)
- Added pagination targets on 'BillingAccountForms', 'CostForms' and 'AP/AR-InvoiceForms' (OFBIZ-7858)
- Cleaned up commented-out code in the Java source for Accounting and Content (OFBIZ-7826) (OFBIZ-7838)
- Cleaned up commented-out code in FreeMarker templates for Accounting (OFBIZ-7860)
- Improved the payment method information UI on the "party profile" screen for creating new payment methods (OFBIZ-7707)
- Improved ViewCertificate to use widgets instead of FTL (OFBIZ-6302)
- Introduced unit testing to OFBiz for components in /src/test/java (OFBIZ-7254)
- Introduced unit tests to the start component (OFBIZ-7897)
- Changed the logger level from "info" to "all" for org.apache.ofbiz (OFBIZ-6274)
- Moved CertKeystore.groovy to "framework/common/groovyScripts" according to best practice (OFBIZ-7892)
- Created demo PartyStatus data for existing parties for the specialpurpose component (OFBIZ-7672)
- Created demo PartyStatus data for existing parties for the applications component (OFBIZ-7673)
- Removed the pos component (OFBIZ-7804) (OFBIZ-7529) (OFBIZ-7908)
- Removed the testlist OFBiz server command (OFBIZ-7924)
- Added the OWASP dependency check plugin for "Copy external jars in OFBiz $buildDir/externalJars for (at least) dependency check" (OFBIZ-7930)
- Commented out the downloads of the main DBMS JDBC drivers (MySQL and PostgreSQL) (OFBIZ-7793)
- Migrated promotext_zh.properties and promotext_zh_TW.properties to ProductPromoUiLabels.xml (OFBIZ-7297)

Bugfixes

Functional and technical bugfixes:

- TrialBalance PDF export fails (OFBIZ-6638)
- Income Statement PDF export fails (OFBIZ-7514)
- Balance Sheet PDF export fails (OFBIZ-7515)
- Order Discount Code Report is not working (OFBIZ-7315)
- Product Demand Report is not working (OFBIZ-7316)
- Error on product detail page (OFBIZ-7212)
- Small UI issue at project overview (OFBIZ-7305)
- Missing required client-side validation on sending a BIRT report by mail (OFBIZ-7421)
- Wrong UI labels for forum group name on the forum group roles and purposes screens (OFBIZ-7676)
- Invalid content was found starting with element 'xls' (OFBIZ-7699)
- Error on cancelling an agreement (OFBIZ-7143)
- While adding a new skill to any party, the old skills disappear from the party skill list (OFBIZ-7560)
- Removed mistakenly added code (OFBIZ-7571)
- Entered "toName" is not getting stored when creating a Party Invitation (OFBIZ-7599)
- Unable to create a new communication from LEAD in SFA (OFBIZ-6421)
- The alt-target tag is not working as expected in the Form Widget (OFBIZ-7513)
- Checks --> Print (PDF) should open in a new window (OFBIZ-7193)
- Duplicated product feature groups associated with a category when duplicating a category with the option selected to duplicate features (OFBIZ-7258)
- Multiple components: checkboxes and radio buttons should get selected when clicking on their labels (OFBIZ-7577) (OFBIZ-7578) (OFBIZ-7580) (OFBIZ-7582) (OFBIZ-7583) (OFBIZ-7584) (OFBIZ-7585) (OFBIZ-7667) (OFBIZ-7668) (OFBIZ-7669)
- The "ALL" checkbox for the status field in Order List does not work properly (OFBIZ-7553)
- Unable to create Product Store Roles from Party manager (OFBIZ-7518)
- Pricing error in variant products when set up with VAT and the price set on the virtual product (OFBIZ-6576)
- The 'Issue Component' option after partial issuance against the required component quantity is not accounting for the already issued quantity (OFBIZ-7512)
- Unable to set "thruDate" on the "List survey" screen of the project component if more than one survey is available (OFBIZ-7703)
- A success message should be shown on screen for a successfully applied promotion (OFBIZ-7654)
- The "Tasks" menu is not showing as selected when clicked in the scrum component (OFBIZ-7652)
- Shipping charges reset to zero on updating the purchase order item quantity (OFBIZ-7063)
- When loading with a Derby database: Error adding foreign key: ModelEntity was null for related entity name Tenant (OFBIZ-7750)
- Missing field "parentTypeId" in the DeductionType entity (OFBIZ-7751)
- UI improvements on the XML Data Export screen: label "Entity Names:" not positioned correctly; "Entity Sync Dump:" text box not visible in all themes except Tomahawk (OFBIZ-7443)
- Inconsistent UI for the Update and Expire buttons on the "Facility Contact Information" screen (OFBIZ-7342)
- Wrong AddedNoteCustRequestNotification.ftl path in CustRequestScreens.xml
- Overview of questions in EditSurveyQuestions.ftl does not paginate properly (OFBIZ-6214)
- Catalog: Product Store Group from the Product Store Group List item doesn't open when clicking on it (OFBIZ-7361)
- Removed TaxAuthorityVATReport forms and the related controller request, as it had been marked as WIP since 2009 (OFBIZ-7764)
- Converted Minilang code that was using the deprecated "call-bsh" element to use the "script" element with Groovy (OFBIZ-7765)
- Multiple issues in the Gradle eclipse plugin (OFBIZ-7779)
- Bug in the OFBizSetup Create Customer step (OFBIZ-7797)
- IterateOverActiveComponents exists twice (OFBIZ-7749)
- Removed unused imports from Groovy files in workeffort and hhfacility (OFBIZ-7761) (OFBIZ-7829)
- "File not found" exception in export to eBay (OFBIZ-7700)
- Running MRP shows all types of facilities; only facilities of type "WAREHOUSE" should be listed (OFBIZ-7168)
- Product look-up not available while adding items to the shopping list (OFBIZ-7823)
- Renamed selectall.js to OfbizUtil.js (OFBIZ-1319)
- UiLabels missing on Examples PDF (OFBIZ-7525)
- Attribute Name should not be editable while updating a Party Attribute record (OFBIZ-7561)
- Creating a CustReq from a CommEvent shows an error on screen (OFBIZ-7435)
- FromDate and ThruDate show empty for WorkEffort children (OFBIZ-7663)
- Broken link to "View Customer request" in the email sent to the customer (OFBIZ-7844)
- Parent Comm Event Id rendering on the "Edit Communication Event" form is distorted (OFBIZ-7840)
- Missing UI label resource in the main-decorator for the SFA component (OFBIZ-7825)
- Party content in the party component is not getting updated (OFBIZ-7612)
- Issue in the SFA "Lead Profile" view in the "quick add" form when a group is provided (OFBIZ-7843)
- Broken screen on "Go Back" from the "Edit Contact Mech" screen in the scrum component (OFBIZ-7712)
- "parentCommEventId" does not get passed as a parameter from "Edit Communication Event" (OFBIZ-7752)
- Unwanted input box on the OrderList screen (OFBIZ-7836)
- Removal of old OFBiz images from the images folder (OFBIZ-7919)
- Gradle tasks not running on Windows (OFBIZ-7815)
- ListGlAccountsReport should open in a new window (OFBIZ-7925)
- Pagination through marketing campaigns is broken (OFBIZ-7922)
- "find Total Backlog Item" in the scrum component is not working in a non-English language (OFBIZ-7929)
- Error when creating PartyTaxAuthInfo (OFBIZ-7442)
Posted 20 days ago by Dave Koelmeyer
By default, JSPWiki displays differences between page versions line-by-line. While serviceable for general use, this is often not fine-grained enough to easily identify specific changes – especially in pages with lots of content. In JSPWiki this setting is known as the difference provider. You can configure alternative difference providers, one of which is the ContextualDiffProvider.

The ContextualDiffProvider enables displaying page version differences on a word-by-word basis. Here's an example of page differences with the default TraditionalDiffProvider enabled (click to enlarge):

Note that while only one word and one character have changed, the entire sentence is highlighted. By comparison, here are the same changes with the ContextualDiffProvider enabled instead (click to enlarge):

Clicking on the blue highlighted double arrows to the left or right of a change will jump to the previous or next change respectively (click to enlarge):

You can find more information on JSPWiki's difference providers here, and you can make the setting in your jspwiki-custom.properties file.
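For reference, switching providers is a one-line change. Assuming a standard JSPWiki installation, the entry in jspwiki-custom.properties would look like this:

```properties
# Use word-by-word diffs instead of the default line-by-line provider
jspwiki.diffProvider = ContextualDiffProvider
```

Restart the wiki (or redeploy the webapp) for the change to take effect.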
Posted 22 days ago by Sally
Welcome August! Whilst many are on holidays, the Apache Community at-large is always productive. Here's what happened over the past week:

ASF Board – management and oversight of the business and affairs of the corporation in accordance with the Foundation's Bylaws.
 - Next Board Meeting: 17 August 2016. Board calendar and minutes available at http://apache.org/foundation/board/calendar.html

ASF Infrastructure – our distributed team on four continents keeps the ASF's infrastructure running around the clock.
 - 7M+ weekly checks yield fab performance at 99.60% uptime http://status.apache.org/

ApacheCon™ – the official conference series of The Apache Software Foundation.
 - CFP and registration open for Apache: Big Data and ApacheCon Europe/Seville http://apachecon.com/

Apache Attic – provides process and solutions to make it clear when an Apache project has reached its end of life.
 - Apache Tuscany retired http://attic.apache.org/projects/tuscany.html

Apache Jackrabbit™ Oak – a scalable, high-performance hierarchical content repository designed for use as the foundation of modern world-class Web sites and other demanding content applications.
 - Apache Jackrabbit Oak 1.5.7 released http://jackrabbit.apache.org/downloads.html

Apache Knox™ – a REST API Gateway for providing secure access to the data and processing resources of Apache Hadoop clusters.
 - Apache Knox 0.9.1 released http://www.apache.org/dyn/closer.cgi/knox/0.9.1

Did You Know?
 - Did you know that Uber uses Apache Aurora, Cassandra, Hadoop, Kafka, Mesos, Spark, Storm, and Thrift at the core of its tech stack? http://aurora.apache.org/, http://cassandra.apache.org/, http://hadoop.apache.org/, http://kafka.apache.org/, http://mesos.apache.org/, http://spark.apache.org/, http://storm.apache.org/, http://thrift.apache.org/
 - Did you know that Southwest's OpsSuite uses Apache Geode (incubating) to show real-time logistics operational status? 10M messages consumed; 1M schedules optimized in seconds! http://geode.incubator.apache.org/
 - Did you know that the ASF is hiring an Infrastructure Systems Administrator/Architect? https://blogs.apache.org/infra/entry/position_available_infrastructure_systems_administrator

Apache Community Notices:
 - Find out how you can participate with Apache community/projects/activities https://helpwanted.apache.org/
 - The list of Apache project-related MeetUps can be found at http://apache.org/events/meetups.html
 - Atlanta Hadoop Users Group presents "Cutting edge with HBASE" on 17 August 2016 http://www.meetup.com/Atlanta-Hadoop-Users-Group/events/230344766/
 - CFP is open for the next Cassandra Summit, 7-9 September 2016 in San Jose https://cfp.cassandrasummit.org/
 - CloudStack Collaboration Conference Brasil will take place 29-30 September 2016 in São Paulo http://cloudstack.usp.br/en/index.php
 - ApacheCon Europe will take place 14-18 November 2016 in Seville, Spain http://apachecon.com/
 - The second ASF Annual Report is available at https://s.apache.org/pTMX
 - Are your software solutions Powered by Apache? Download & use our "Powered By" logos http://www.apache.org/foundation/press/kit/#poweredby
 - Show your support for Apache with ASF-approved swag from http://www.zazzle.com/featherwear and http://s.apache.org/landsend – all proceeds benefit the ASF!

= = =

For real-time updates, sign up for Apache-related news by sending mail to announce-subscribe@apache.org and follow @TheASF on Twitter. For a broader spectrum from the Apache community, https://twitter.com/PlanetApache provides an aggregate of Project activities as well as the personal blogs and tweets of select ASF Committers.

# # #
Posted 26 days ago by Dave Koelmeyer
Continuing on from yesterday's blog post on Smart Typing Pairs, let's take a look at another time-saving feature in JSPWiki's new plain editor: Tab Completion. With Tab Completion enabled, certain keywords typed into the editor will expand to a full text snippet when followed by a Tab keystroke. Tab Completion can be enabled from the Haddock editor's settings menu (click to enlarge):

A good example is adding a Table of Contents. Typically this is achieved by manually entering the following markup:

[{TableOfContents}]

To instead add a Table of Contents using Tab Completion, simply enter the keyword toc followed by a Tab keystroke. JSPWiki will automatically expand the keyword to the full text snippet. Some other useful examples include the sign keyword, which expands to a user's signature and date, and the quote keyword, which will create fancy formatting for quoted content. There's a range of additional keywords available, which you can view here (and for the more adventurous users, the snippets can also be customised).
Posted 26 days ago by Dave Koelmeyer
A great new feature in JSPWiki's Haddock editor is Smart Typing Pairs. When enabled, certain characters will, when typed, automatically be balanced with their closing counterparts. Example characters include quotation marks, parentheses, and curly and square brackets. This saves time when entering content, as one does not have to manually close these characters – the Haddock editor will do it for you.

You can enable Smart Typing Pairs in the Haddock editor's settings menu (click to enlarge):

The preference will be remembered, so it's a one-time setting. Now enter a square bracket or double quotation mark, and you'll see the editor will close the character for you (click to enlarge):

When creating (for example) links with Smart Typing Pairs enabled, one only has to enter the opening square bracket ("[") and then paste the desired address – Haddock will take care of the rest. You can also select existing content and enter the relevant opening character, which will automatically wrap the selection in the correct associated closing character. In the following example we wish to add double quotation marks to the Haddock Template text (click to enlarge):

First select the text (click to enlarge):

And now simply enter Shift+" on the keyboard to enter a double quotation mark. Haddock automatically adds the associated closing character around the selection (click to enlarge):

For more information please check out the Haddock Editor documentation.
Posted 29 days ago by Sally
As we're wrapping up the month, let's review the Apache Community's activities over the past week:

ASF Board – management and oversight of the business and affairs of the corporation in accordance with the Foundation's Bylaws.
 - Next Board Meeting: 17 August 2016. Board calendar and minutes available at http://apache.org/foundation/board/calendar.html

ASF Infrastructure – our distributed team on four continents keeps the ASF's infrastructure running around the clock.
 - 7M+ weekly checks yield stable performance at 99.43% uptime http://status.apache.org/

ApacheCon™ – the official conference series of The Apache Software Foundation.
 - CFP and registration open for Apache: Big Data and ApacheCon Europe/Seville http://apachecon.com/

Apache Airavata™ – a software framework providing APIs, sophisticated server-side tools, and graphical user interfaces to construct, execute, control and manage long running applications and workflows on distributed computing resources.
 - Apache Airavata 0.16 released http://airavata.apache.org/development.html

Apache Directory Server™ – an extensible and embeddable directory server entirely written in Java, which has been certified LDAPv3 compatible by the Open Group.
 - ApacheDS 2.0.0-M23 released http://directory.apache.org/apacheds/downloads.html

Apache Fortress™ – a sub-project of Apache Directory that provides role-based access control, delegated administration and password policy services with LDAP.
 - Apache Fortress 1.0.1 released http://directory.apache.org/fortress/downloads.html

Apache Jackrabbit™ Oak – a scalable, high-performance hierarchical content repository designed for use as the foundation of modern world-class Web sites and other demanding content applications.
 - Apache Jackrabbit Oak 1.5.6 released http://jackrabbit.apache.org/downloads.html

Apache Kudu™ – Open Source columnar storage engine that enables fast analytics across the Internet of Things, time series, cybersecurity, and other Big Data applications in the Apache Hadoop ecosystem.
 - The Apache Software Foundation Announces Apache® Kudu™ as a Top-Level Project https://s.apache.org/9OIw

Apache Kylin™ – an Open Source Distributed Analytics Engine designed to provide a SQL interface and multi-dimensional analysis (OLAP) on Hadoop, supporting extremely large datasets.
 - Apache Kylin 1.5.3 released https://kylin.apache.org/download/

Apache Mesos™ – mature Open Source cluster resource manager, container orchestrator, and distributed operating systems kernel.
 - The Apache Software Foundation Announces Apache® Mesos™ v1.0 https://s.apache.org/qbx4

Apache Open Climate Workbench™ – a comprehensive suite of algorithms, libraries, and interfaces designed to standardize and streamline the process of interacting with large quantities of observational data (such as is provided by the RCMED) and conducting regional climate model evaluations.
 - Apache Open Climate Workbench 1.1.0 released http://climate.apache.org/downloads.html

Apache Traffic Server™ – a fast, scalable and extensible HTTP/1.1 compliant caching proxy server.
 - Apache Traffic Server 6.2.0 released http://trafficserver.apache.org/downloads

Apache Twill™ – Open Source abstraction layer over Apache Hadoop® YARN that simplifies developing distributed Hadoop applications.
 - The Apache Software Foundation Announces Apache® Twill™ as a Top-Level Project https://s.apache.org/Rzsf

Did You Know?
 - Did you know that the ASF has a Code of Conduct for Apache Projects and Communities? http://www.apache.org/foundation/policies/conduct
 - Did you know that Apache CouchDB has a series of blog posts that highlight The Road to CouchDB 2.0? https://blog.couchdb.org/2016/07/25/the-road-to-couchdb-2-0/
 - Did you know that the ASF is hiring an Infrastructure Systems Administrator/Architect? https://blogs.apache.org/infra/entry/position_available_infrastructure_systems_administrator

Apache Community Notices:
 - Find out how you can participate with Apache community/projects/activities https://helpwanted.apache.org/
 - The list of Apache project-related MeetUps can be found at http://apache.org/events/meetups.html
 - Atlanta Hadoop Users Group presents "Cutting edge with HBASE" on 17 August 2016 http://www.meetup.com/Atlanta-Hadoop-Users-Group/events/230344766/
 - CFP is open for the next Cassandra Summit, 7-9 September 2016 in San Jose https://cfp.cassandrasummit.org/
 - CloudStack Collaboration Conference Brasil will take place 29-30 September 2016 in São Paulo http://cloudstack.usp.br/en/index.php
 - ApacheCon Europe will take place 14-18 November 2016 in Seville, Spain http://apachecon.com/
 - The second ASF Annual Report is available at https://s.apache.org/pTMX
 - Are your software solutions Powered by Apache? Download & use our "Powered By" logos http://www.apache.org/foundation/press/kit/#poweredby
 - Show your support for Apache with ASF-approved swag from http://www.zazzle.com/featherwear and http://s.apache.org/landsend – all proceeds benefit the ASF!

= = =

For real-time updates, sign up for Apache-related news by sending mail to announce-subscribe@apache.org and follow @TheASF on Twitter. For a broader spectrum from the Apache community, https://twitter.com/PlanetApache provides an aggregate of Project activities as well as the personal blogs and tweets of select ASF Committers.

# # #
Posted about 1 month ago by Sally
Mature Open Source cluster resource manager, container orchestrator, and distributed operating systems kernel in use at Netflix, Samsung, Twitter, and Yelp, among others.

Forest Hill, MD —27 JULY 2016— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today the availability of Apache® Mesos™ v1.0, the mature clustering resource management platform. Apache Mesos provides efficient resource isolation and sharing across distributed applications in Cloud environments as well as private datacenters. Mesos is a cluster resource manager, a container orchestrator, and a distributed operating systems kernel.

"At Berkeley in 2009, we were thinking about a new way to manage clusters and Big Data, and Mesos was born," said Benjamin Hindman, Vice President of Apache Mesos, one of the original creators of the project, and Chief Architect/Co-Founder of Mesosphere. "Mesos v1.0 is a major milestone for the community."

Mesos entered the Apache Incubator in 2010 and has had 36 releases since becoming a Top-Level Project (TLP) in 2013.

Under The Hood

Apache Mesos 1.0 includes a number of new and important features:

New HTTP API: One of the main areas of improvement in the 1.0 release, this API simplifies writing Mesos frameworks by allowing developers to write frameworks in any language via HTTP. The HTTP API also makes it easy to run frameworks behind firewalls and inside containers.

Unified containerizer: This allows frameworks to launch Docker/Appc containers using the Mesos containerizer without relying on the docker daemon (engine) or rkt. The isolation of the containers is done using isolators.

CNI support: The network/cni isolator has been introduced in the Mesos containerizer to implement the Container Network Interface (CNI) specification proposed by CoreOS.
With CNI, the network/cni isolator is able to allocate a network namespace to Mesos containers and attach the container to different types of IP networks by invoking network drivers called CNI plugins.

GPU support: Support for using Nvidia GPUs as a resource in the Mesos "unified" containerizer. This support includes running containers with and without filesystem isolation (i.e., running both imageless containers as well as containers using a Docker image).

Fine-grained authorization: Many of Mesos' API endpoints have added authentication and authorization, so that operators can now control which users can view which tasks/frameworks in the web UI and API, in addition to fine-grained access control over other operator APIs such as reservations, volumes, weights, and quota.

Mesos on Windows: Support for running Mesos on the Windows operating system is currently in beta. The Mesos community is aiming for full support by late 2016.

Over the years, Mesos has gained popularity with datacenter operators for being the first Open Source platform capable of running containers at scale in production environments, using both Docker containers and Linux control groups (cgroups) and namespace technologies directly. Mesos' two-level scheduler distinguishes the platform as the only one that allows distributed applications such as Apache Spark, Apache Kafka, and Apache Cassandra to schedule their own workloads, using their own schedulers, within the resources originally allocated to the framework and isolated within a container.

"Initially the big breakthrough was this new way to run containers at scale, but the beauty of the design of Mesos and its two-level scheduler has proven to be its ability to run not only containers, but Big Data frameworks, storage services, and other applications all on the same cluster," added Hindman.
"Mesos has become a core technology that serves as a kernel for other systems to be built on top, so the maturity of the API has been a big focus, and it's one of the main areas of improvement in the 1.0 release."

These capabilities have distinguished Apache Mesos as the kernel of choice for many Open Source and commercial offerings. One of Mesos' earliest and most notable users was Twitter, which leveraged the Mesos architecture to kill the "Fail Whale" by handling its massive growth in site traffic. Prominent Mesos contributors and users include IBM, Mesosphere, Netflix, PayPal, Yelp, and many more.

"We use Mesos regularly at NASA JPL - we are leveraging Mesos to manage cluster resources in concert with Apache Spark to identify Mesoscale Convective Complexes (MCC), or extreme weather events, in satellite infrared data. Mesos has performed well in managing a high-memory cluster for our team," said Chris A. Mattmann, member of the Apache Mesos Project Management Committee, and Chief Architect, Instrument and Science Data Systems Section at NASA JPL. "We have also taken steps to integrate the Apache OODT data processing framework used in our missions with Apache Mesos."

Learn more about Apache Mesos at the MesosCon Europe 2016 conference in Amsterdam, 31 August-1 September 2016, and at MesosCon Asia 2016 in Hangzhou, China, 18-19 November 2016.

Availability and Oversight

Apache Mesos software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases.
For downloads, documentation, and ways to become involved with Apache Mesos, visit http://mesos.apache.org/ and https://twitter.com/ApacheMesos

About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 550 individual Members and 5,300 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Alibaba Cloud Computing, ARM, Bloomberg, Budget Direct, Cerner, Cloudera, Comcast, Confluent, Facebook, Google, Hortonworks, HP, Huawei, IBM, InMotion Hosting, iSigma, LeaseWeb, Microsoft, OPDi, PhoenixNAP, Pivotal, Private Internet Access, Produban, Red Hat, Serenata Flowers, WANdisco, and Yahoo. For more information, visit http://www.apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "Mesos", "Apache Mesos", "Cassandra", "Apache Cassandra", "Kafka", "Apache Kafka", "OODT", "Apache OODT", "Spark", "Apache Spark", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #
Posted about 1 month ago by Sally
Open Source abstraction layer over Apache Hadoop® YARN simplifies developing distributed Hadoop applications

Forest Hill, MD –27 July 2016– The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today that Apache® Twill™ has graduated from the Apache Incubator to become a Top-Level Project (TLP), signifying that the project's community and products have been well-governed under the ASF's meritocratic process and principles.

Apache Twill is an abstraction over Apache Hadoop® YARN that reduces the complexity of developing distributed Hadoop applications, allowing developers to focus more on their application logic.

"The Twill community is excited to graduate from the Apache Incubator to a Top-Level Project," said Terence Yim, Vice President of Apache Twill and Software Engineer at Cask. "We are proud of the innovation, creativity and simplicity Twill demonstrates. We are also very excited to bring a technology so versatile in Hadoop into the hands of every developer in the industry."

Apache Twill provides rich built-in features for developing, deploying, and managing common distributed applications, greatly easing Hadoop cluster operation and administration.

"Enterprises use big data technologies - and specifically Hadoop - to drive more value," said Patrick Hunt, member of the Apache Software Foundation and Senior Software Engineer at Cloudera. "Apache Twill helps streamline and reduce the complexity of developing distributed applications, and its graduation to an Apache Top-Level Project means more people will be able to take advantage of Apache Hadoop YARN more easily."
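To give a flavor of the abstraction Twill provides, the sketch below shows the shape of a minimal Twill application using the project's documented API. The class name and ZooKeeper address are placeholders, and actually running it requires a Hadoop YARN cluster, so treat this as an outline rather than a ready-to-run program:

```java
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.twill.api.AbstractTwillRunnable;
import org.apache.twill.api.TwillController;
import org.apache.twill.api.TwillRunnerService;
import org.apache.twill.yarn.YarnTwillRunnerService;

public class HelloTwill {
  // A TwillRunnable is the unit of work that Twill launches
  // inside a YARN container.
  public static class HelloWorldRunnable extends AbstractTwillRunnable {
    @Override
    public void run() {
      System.out.println("Hello from a YARN container");
    }
  }

  public static void main(String[] args) {
    // The runner service talks to YARN and uses ZooKeeper for
    // coordination; "zkhost:2181" is a placeholder connection string.
    TwillRunnerService runner =
        new YarnTwillRunnerService(new YarnConfiguration(), "zkhost:2181");
    runner.start();

    // Prepare and launch the runnable; the returned controller can be
    // used to inspect logs, change instance counts, or stop the app.
    TwillController controller =
        runner.prepare(new HelloWorldRunnable()).start();
  }
}
```

The point of the example is that the developer writes a plain Java class resembling a Runnable instead of implementing YARN's ApplicationMaster and container-management protocols directly.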
"This is an exciting and major milestone for Apache Twill," said Keith Turner, member of the Apache Fluo (incubating) Project Management Committee, which used Twill in the development of Fluo, an Open Source project that makes it possible to update the results of a large-scale computation, index, or analytic as new data is discovered. "Early in development, we knew we needed a standard way to launch Fluo across a cluster, and we found Twill. With Twill, we quickly and easily had Fluo running across many nodes on a cluster."

Apache Twill is in production at several organizations across various industries, easing distributed Hadoop application development and deployment. Twill originated at Cask in early 2013. After 7 major releases, the project was submitted to the Apache Incubator in November 2013.

"Apache Twill has come a long way through The Apache Software Foundation, and we're thrilled it has become an ASF Top-Level Project," said Nitin Motgi, CTO of Cask. "Apache Twill has become a key component behind the Cask Data Application Platform (CDAP), using YARN containers and Java threads as the processing abstraction. CDAP is an Open Source integration and application platform that makes it easy for developers and organizations to quickly build, deploy and manage data applications on Apache Hadoop and Apache Spark."

"The Apache Twill community worked extremely well within the incubator environment, developing and collaborating openly to follow The Apache Way," said Henry Saputra, ASF Member and member of the Apache Twill Project Management Committee. "There is a tremendous demand for effective APIs and virtualization for developing big data applications, and Apache Twill fills that need perfectly. We’re looking forward to continuing the journey with Apache Twill as a Top-Level Project."
Catch Apache Twill in action at JavaOne, 18-22 September 2016 in San Francisco, and at Strata+Hadoop World, 27-29 September 2016 in New York City.

Availability and Oversight
Apache Twill software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Twill, visit http://twill.apache.org/ and follow @ApacheTwill.

About the Apache Incubator
The Apache Incubator is the entry path for projects and codebases wishing to become part of the efforts at The Apache Software Foundation. All code donations from external organizations and existing external projects wishing to join the ASF enter through the Incubator to: 1) ensure all donations are in accordance with the ASF legal standards; and 2) develop new communities that adhere to our guiding principles. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision-making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. For more information, visit http://incubator.apache.org/

©The Apache Software Foundation. "Apache", "Twill", "Apache Twill", "Hadoop", "Apache Hadoop", "Apache Hadoop YARN", "Spark", "Apache Spark", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #
Posted about 1 month ago by Sally
Open Source columnar storage engine enables fast analytics across the Internet of Things, time series, cybersecurity, and other Big Data applications in the Apache Hadoop ecosystem

Forest Hill, MD –25 July 2016– The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today that Apache® Kudu™ has graduated from the Apache Incubator to become a Top-Level Project (TLP), signifying that the project's community and products have been well-governed under the ASF's meritocratic process and principles.

Apache Kudu is an Open Source columnar storage engine built for the Apache Hadoop ecosystem, designed to enable flexible, high-performance analytic pipelines.

"Under the Apache Incubator, the Kudu community has grown to more than 45 developers and hundreds of users," said Todd Lipcon, Vice President of Apache Kudu and Software Engineer at Cloudera. "This recognition of our strong Open Source community is a testament to the power of collaboration, and the upcoming 1.0 release promises to give users an even better storage layer that complements Apache HBase and HDFS."

Optimized for lightning-fast scans, Kudu is particularly well suited to hosting time-series data and various types of operational data. In addition to its impressive scan speed, Kudu supports many operations available in traditional databases, including real-time insert, update, and delete operations. Kudu enables a "bring your own SQL" philosophy, and can be accessed by multiple query engines, including other Apache projects such as Drill, Spark, and Impala (incubating).

Apache Kudu is in use at diverse companies and organizations across many industries, including retail, online service delivery, risk management, and digital advertising.
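The traditional-database-style operations mentioned above can be sketched against Kudu's Java client API (package names shown are those of the 1.0-era client; the master address, table name, and columns are illustrative, and running this requires a live Kudu cluster with a matching table):

```java
import org.apache.kudu.client.Insert;
import org.apache.kudu.client.KuduClient;
import org.apache.kudu.client.KuduSession;
import org.apache.kudu.client.KuduTable;
import org.apache.kudu.client.PartialRow;

public class KuduWriteSketch {
  public static void main(String[] args) throws Exception {
    // "kudu-master:7051" and the "metrics" table are placeholders.
    KuduClient client =
        new KuduClient.KuduClientBuilder("kudu-master:7051").build();
    try {
      KuduTable table = client.openTable("metrics");
      KuduSession session = client.newSession();

      // Real-time insert: build a row and apply it through the session.
      Insert insert = table.newInsert();
      PartialRow row = insert.getRow();
      row.addString("host", "web01");
      row.addLong("ts", System.currentTimeMillis());
      row.addDouble("cpu_load", 0.42);
      session.apply(insert);
      session.flush();
      // Updates and deletes follow the same pattern via
      // table.newUpdate() and table.newDelete().
    } finally {
      client.shutdown();
    }
  }
}
```

The same table can then be queried through any of the SQL engines the release names, which is the "bring your own SQL" idea: the storage engine handles fast scans and row-level mutations, while query planning stays with the engine of the user's choice.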
"Using Apache Kudu alongside interactive SQL tools like Apache Impala (incubating) has allowed us to deploy a next-generation platform for real-time analytics and online reporting," said Baoqiu Cui, Chief Architect at Xiaomi. "Apache Kudu has been deployed in production at Xiaomi for more than six months and has enabled us to improve key reliability and performance metrics for our customers. Kudu's graduation to a Top-Level Project allows companies like ours to operate a hybrid architecture without complexity. We look forward to continuing to contribute to its success."

"We are already seeing the many benefits of Apache Kudu. In fact, we're using its combination of fast scans and fast updates for upcoming releases of our risk solutions," said Cory Isaacson, CTO at Risk Management Solutions, Inc. "Kudu is performing well, and RMS is proud to have contributed to the project’s integration with Apache Spark."

"The Internet of Things, cybersecurity, and other fast data drivers highlight the demands that real-time analytics place on Big Data platforms," said Arvind Prabhakar, Apache Software Foundation member and CTO of StreamSets. "Apache Kudu fills a key architectural gap by providing an elegant solution spanning both traditional analytics and fast data access. StreamSets provides native support for Apache Kudu to help build real-time ingestion and analytics for our users."

"Graduation to a Top-Level Project marks an important milestone in the Apache Kudu community, but we are really just beginning to achieve our vision of a hybrid storage engine for analytics and real-time processing," added Lipcon. "As our community continues to grow, we welcome feedback, use cases, bug reports, patch submissions, documentation, new integrations, and all other contributions."

The Apache Kudu project welcomes contributions and community participation through mailing lists, a Slack channel, face-to-face MeetUps, and other events.
Catch Apache Kudu in action at Strata+Hadoop World, 26-29 September 2016 in New York.

Availability and Oversight
Apache Kudu software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For project updates, downloads, documentation, and ways to become involved with Apache Kudu, visit http://kudu.apache.org/, @ApacheKudu, and http://kudu.apache.org/blog/.

© The Apache Software Foundation. "Apache", "Kudu", "Apache Kudu", "Drill", "Apache Drill", "Hadoop", "Apache Hadoop", "Apache Impala (incubating)", "Spark", "Apache Spark", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #