
News

Posted 2 days ago by elserj
This weekend, the Apache Pig project released version 0.13.0. Of note to Accumulo, Pig 0.13.0 is the first version of Apache Pig that includes direct support for interacting with Accumulo. A full reference document for AccumuloStorage is also available in the official Apache Pig documentation. I hope to cover the high-level support that's included and how someone familiar with Accumulo might leverage Pig to solve problems more easily.

Pig uses a language known as Pig Latin to interact with data. Each statement in a Pig Latin script accepts a relation and returns another relation. A statement may also include an expression or schema. The schema has a number of primitive types (int, long, datetime, chararray, etc.) in addition to composite types (tuples, bags and maps). Each record in Pig is a tuple, and each element of that tuple is a field of one of the aforementioned types (primitive or composite).

Data Representation

When using AccumuloStorage, each record in Pig is an Accumulo row. The fields inside a Pig record map to columns within that Accumulo row. These column mappings are dynamic and are required with each use of AccumuloStorage. Of note, multiple relations can be used against the same Accumulo table with different columns, which gives Pig the means for efficient computation without unnecessary overhead. Prefixes over column families or qualifiers can be used to construct Pig maps automatically, allowing easy grouping of data from Accumulo in Pig.

Connecting to Accumulo

Pig includes an interface known as a StorageHandler which allows any custom implementation of a backing store to be used by Pig. This is the interface we can use to read from and write to Accumulo. Like any connection to Accumulo, we need four basic parameters: the instance name, the ZooKeeper quorum, a username and a password. Additionally, we need a mapping from each field in a relation to columns within Accumulo.

Column Mappings

A column mapping is the other necessary parameter when interacting with Accumulo. Its purpose is twofold. First, the mapping defines which Accumulo columns in each row map to which Pig fields, and it also defines the type of each field. The mapping is specified as the first argument passed to AccumuloStorage: a comma-separated string in which each element is a column family and, optionally, a column qualifier, separated by a colon (e.g. cf[:cq]). Second, the mapping ties each element to the Pig schema for the given relation.
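To make the column-mapping discussion concrete, here is a minimal sketch (not taken from the post) of driving a Pig Latin script that uses AccumuloStorage from Java through PigServer. The table name, credentials and columns are hypothetical; the accumulo:// load location and the comma-separated column mapping follow the AccumuloStorage reference described above, so double-check the parameter names against your Pig version.

    import java.util.Iterator;

    import org.apache.pig.ExecType;
    import org.apache.pig.PigServer;
    import org.apache.pig.data.Tuple;

    public class AccumuloPigSketch {
        public static void main(String[] args) throws Exception {
            // Run Pig in local mode for the sketch; a real job would use MapReduce/Tez.
            PigServer pig = new PigServer(ExecType.LOCAL);

            // Column mapping 'meta:name,meta:size': two fields, each bound to a
            // specific column family:qualifier pair; the row key comes first in the schema.
            pig.registerQuery(
                "files = LOAD 'accumulo://file_metadata?instance=accumulo"
              + "&user=root&password=secret&zookeepers=localhost:2181' "
              + "USING org.apache.pig.backend.hadoop.accumulo.AccumuloStorage('meta:name,meta:size') "
              + "AS (row:chararray, name:chararray, size:long);");

            // Each Accumulo row becomes one Pig tuple with the mapped fields.
            pig.registerQuery("big = FILTER files BY size > 1048576L;");

            Iterator<Tuple> it = pig.openIterator("big");
            while (it.hasNext()) {
                System.out.println(it.next());
            }
        }
    }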
Posted 2 days ago by dlmarion
From unit testing your application to debugging locally to testing at scale, Accumulo provides several tools for testing:

Unit Testing
 - Mock Accumulo

Integration Testing
 - Mini Accumulo Cluster

Testing Locally
 - Fake Shell
 - Scripts to load data and execute client code

Testing at scale
 - clone a table
 - turn off compactions to ensure that no new files are created
 - set a classpath context to test new versions of iterators
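As a rough illustration of the integration-testing option, here is a minimal sketch (assuming the Accumulo 1.x minicluster API; the table name and password are made up) of starting a Mini Accumulo Cluster in a test and connecting to it like a regular client:

    import java.io.File;
    import java.nio.file.Files;

    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.ZooKeeperInstance;
    import org.apache.accumulo.core.client.security.tokens.PasswordToken;
    import org.apache.accumulo.minicluster.MiniAccumuloCluster;

    public class MiniClusterSketch {
        public static void main(String[] args) throws Exception {
            File tmp = Files.createTempDirectory("mini-accumulo").toFile();

            // Start a throwaway Accumulo instance backed by a temporary directory.
            MiniAccumuloCluster mini = new MiniAccumuloCluster(tmp, "rootPassword");
            mini.start();

            // Connect exactly as a real client would, using the mini cluster's
            // instance name and ZooKeeper connect string.
            Connector conn = new ZooKeeperInstance(mini.getInstanceName(), mini.getZooKeepers())
                    .getConnector("root", new PasswordToken("rootPassword"));
            conn.tableOperations().create("test_table");  // hypothetical table

            // ... run the client code under test against conn ...

            mini.stop();
        }
    }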
Posted 2 days ago by Sally
Happy Friday! Let's review what the Apache community has been up to over the past week:

Support Apache – help keep Apache software for everyone.
 - Two weeks remain to make a tax-deductible donation to the ASF in 2018! 750+ generous contributors donated $80K+ as part of our Individual Giving campaigns. Every dollar counts! http://donate.apache.org

ASF Board – management and oversight of the business affairs of the corporation in accordance with the Foundation's bylaws.
 - Next Board Meeting: 19 December. Board calendar and minutes http://apache.org/foundation/board/calendar.html

ApacheCon™ – the ASF's official global conference series, now in its 20th year.
 - CFP Now Open: Apache Roadshow Chicago 13-14 May 2019 http://apachecon.com/chiroadshow19/index.html
 - Save the Date: ApacheCon North America 2019 will take place 9-13 September in Las Vegas http://apachecon.com/

ASF Infrastructure – our distributed team on three continents keeps the ASF's infrastructure running around the clock.
 - 7M+ weekly checks yield grand performance at 99.64% uptime. http://status.apache.org/

Apache Code Snapshot – this week, 469 Apache contributors changed 968,811 lines of code over 2,704 commits. Top 5 contributors, in order, are: Mark Robert Miller, Mark Thomas, Andrew Purtell, Andrea Cosentino, and Matteo Merli.

Apache Beam™ – an Open Source unified programming model to define and execute Big Data processing pipelines.
 - Apache Beam 2.9.0 released https://beam.apache.org/

Apache Calcite™ Avatica – a framework for building database drivers.
 - Apache Calcite Avatica 1.13.0 released https://calcite.apache.org/

Apache Geode™ – a Big Data management platform that provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing.
 - Apache Geode 1.8.0 released http://geode.apache.org/

Apache Griffin™ – Open Source Big Data quality solution.
 - The Apache Software Foundation Announces Apache® Griffin™ as a Top-Level Project https://s.apache.org/n21m

Apache Gobblin (incubating) – a distributed data integration framework that simplifies common aspects of Big Data integration.
 - Apache Gobblin (incubating) 0.14.0 released https://gobblin.apache.org/

Apache Groovy™ – a multi-faceted programming language for the JVM.
 - Apache Groovy 2.4.16 released http://groovy.apache.org

Apache Hivemall (incubating) – a scalable machine learning library implemented as Hive UDFs/UDAFs/UDTFs.
 - Apache Hivemall 0.5.2-incubating released http://hivemall.incubator.apache.org/

Apache Jackrabbit™ – a fully compliant implementation of the Content Repository for Java(TM) Technology API, version 2.0 (JCR 2.0) as specified in the Java Specification Request 283 (JSR 283).
 - Apache Jackrabbit 2.19.0 released http://jackrabbit.apache.org/

Apache Log4j™ Audit – a framework for performing audit logging using a predefined catalog of audit events.
 - Apache Log4j-Audit 1.0.1 released http://logging.apache.org/

Did You Know?
 - Did you know that the following Apache projects are celebrating anniversaries this month? Apache Portable Runtime/APR (18 years); Logging Services (15 years); Cayenne, OFBiz, and Tiles (12 years); Synapse (11 years); Camel (10 years); Axis, OpenWebBeans, and Pivot (9 years); Aries (8 years); Flex (6 years); Helix (5 years); Falcon and Flink (4 years); Beam and Eagle (2 years); and Trafodion (1 year)? https://projects.apache.org/committees.html?date
 - Did you know that Jet.com uses Apache TinkerPop Gremlin to enable help center agents to view and edit help center content across all their communication channels? http://tinkerpop.apache.org/
 - Did you know that Dremio recently donated the Gandiva Initiative code base to Apache Arrow? Improved efficiency and performance for analytics, machine learning, and data science on Arrow data structures! http://arrow.apache.org/

Apache Community Notices:
 - ASF Operations Summary: Q2 FY2019 https://s.apache.org/d2Fq
 - ASF Annual Report for FY2018 https://s.apache.org/FY2018AnnualReport
 - The Apache Software Foundation 2018 Vision Statement https://s.apache.org/zqC3
 - Foundation Statement – Apache Is Open. https://s.apache.org/PIRA
 - "Success at Apache" focuses on the processes behind why the ASF "just works". https://blogs.apache.org/foundation/category/SuccessAtApache
 - Please follow/like/re-tweet the ASF on social media: @TheASF on Twitter and on LinkedIn at https://www.linkedin.com/company/the-apache-software-foundation
 - Do friend and follow us on the Apache Community Facebook page https://www.facebook.com/ApacheSoftwareFoundation/ and Twitter account https://twitter.com/ApacheCommunity
 - The list of Apache project-related MeetUps can be found at http://events.apache.org/event/meetups.html
 - Flink Forward China will take place 21-22 December 2018 in Beijing https://china-2018.flink-forward.org/call-for-presentations-submit-talk/
 - The Apache Big Data community will be at DataWorks Summit 18-21 March 2019 in Barcelona and 20-23 May 2019 in Washington DC https://dataworkssummit.com/
 - Future dates for Spark + AI Summit 2019 announced: 23-25 April/San Francisco and 15-17 October/Amsterdam https://databricks.com/sparkaisummit/
 - Block your calendars for ApacheCon North America: taking place in September 2019; announcing dates and details soon. http://apachecon.com/
 - Find out how you can participate with Apache community/projects/activities -- opportunities open with Apache HTTP Server, Avro, ComDev (community development), Directory, Incubator, OODT, POI, Polygene, Syncope, Tika, Trafodion, and more! https://helpwanted.apache.org/
 - Are your software solutions Powered by Apache? Download & use our "Powered By" logos http://www.apache.org/foundation/press/kit/#poweredby

= = =

For real-time updates, sign up for Apache-related news by sending mail to announce-subscribe@apache.org and follow @TheASF on Twitter. For a broader spectrum from the Apache community, https://twitter.com/PlanetApache provides an aggregate of Project activities as well as the personal blogs and tweets of select ASF Committers.

# # #
Posted 4 days ago by Denis Magda
Deep Learning With TensorFlow

Even though it was natural to provide machine learning algorithms in Ignite out of the box, another direction was taken for deep learning capabilities, primarily because machine learning approaches have already been adopted in businesses big and small, while deep learning is still used for narrow and specific use cases. Thus, Ignite 2.7 can boast an official integration with the TensorFlow deep learning framework, which gives a way to use Ignite as distributed storage for TensorFlow calculations. With Ignite, data scientists can store unlimited data sets across a cluster, gain performance improvements, and rely on the fault tolerance of both products if an algorithm fails in the middle of an execution.

Extended Languages Support - Node.JS, Python, PHP

Java, .NET and C++ have been extensively supported by Ignite for a while now. But until now, when it came to other languages, developers had to fall back on REST or JDBC/ODBC calls. To address the lack of native APIs for programming languages other than the three above, the community released a low-level binary protocol used to build thin clients. A thin client is a lightweight Ignite client that connects to the cluster via a standard socket connection. Based on this protocol, Ignite 2.7 adds support for Node.JS, Python and PHP. As for Java, .NET and C++, you can use thin clients there as well if the regular clients are not suitable for some reason.

Transparent Data Encryption

For those of you who are using Ignite persistence in production, this functionality brings peace of mind. Whether you store some sensitive information or an entire data set has to be encrypted due to regulations, this feature is what you need. Check this page for more details.

Transactional SQL Beta

Last, but probably the most anticipated addition to Ignite, is fully transactional SQL. You're no longer limited to key-value APIs if an application needs to run ACID-compliant distributed transactions. Prefer SQL? Use SQL! Yes, it's still in beta and might not yet be the best fit for mission-critical deployments, but definitely try it in your development cycles and share your feedback. It took us several years to reach this milestone, and before the GA release comes out, we want to hear what you think.

Finally, I have no more paper left to cover other optimizations and improvements. So, go ahead and check out our release notes.
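As an illustration of the thin-client model described above, here is a minimal sketch using the Java thin client; the cache name and address are made up, and the Node.JS, Python and PHP clients follow the same connect/put/get pattern over the binary protocol:

    import org.apache.ignite.Ignition;
    import org.apache.ignite.client.ClientCache;
    import org.apache.ignite.client.IgniteClient;
    import org.apache.ignite.configuration.ClientConfiguration;

    public class ThinClientSketch {
        public static void main(String[] args) throws Exception {
            // Point the thin client at a cluster node's client connector port (10800 by default).
            ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");

            // The client speaks the binary protocol over a plain socket; no full Ignite node is started.
            try (IgniteClient client = Ignition.startClient(cfg)) {
                ClientCache<Integer, String> cache = client.getOrCreateCache("demo"); // hypothetical cache
                cache.put(1, "Hello from a thin client");
                System.out.println(cache.get(1));
            }
        }
    }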
Posted 4 days ago by khmarbaise
The Apache Maven team is pleased to announce the release of the Apache Maven Jar Plugin, version 3.1.1.

This plugin provides the capability to build jars.

Important Notes:
 - Maven 3.X only
 - JDK 7 minimum requirement

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-jar-plugin</artifactId>
    <version>3.1.1</version>
  </plugin>

Release Notes – Maven JAR Plugin – Version 3.1.1

Bug:
 - MJAR-241 – Jar package does not have a size in ZipEntry

Improvement:
 - MJAR-260 – Upgrade to Archiver 3.3.0 and add ITs

Task:
 - MJAR-251 – Add documentation information for GitHub

Dependency upgrades:
 - MJAR-252 – Upgrade plexus-archiver to 3.6.0
 - MJAR-255 – Upgrade maven-plugins parent to version 32
 - MJAR-256 – Upgrade JUnit to 4.12
 - MJAR-261 – Upgrade plexus-archiver 3.7.0

Enjoy,

The Apache Maven team
Posted 4 days ago by khmarbaise
The Apache Maven team is pleased to announce the release of the Apache Maven Help Plugin, version 3.1.1.

The Maven Help Plugin is used to get relative information about a project or the system. It can be used to get a description of a particular plugin, including the plugin’s goals with their parameters and component requirements, the effective POM and effective settings of the current build, and the profiles applied to the current project being built.

Important Notes since Version 3.0.0:
 - Maven 3+ only
 - JDK 7 minimum requirement

You should specify the version in your project’s plugin configuration:

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-help-plugin</artifactId>
    <version>3.1.1</version>
  </plugin>

You can download the appropriate sources etc. from the download page.

Release Notes – Maven Help Plugin – Version 3.1.1

Improvement:
 - MPH-154 – The output of the plugin should be flushed when using forceStdout

Dependency upgrades:
 - MPH-153 – Upgrade maven-plugins parent to version 32
 - MPH-156 – Upgrade maven-artifact-transfer to 0.10.0
 - MPH-157 – Upgrade plexus-interactivity-api 1.0-alpha-6
 - MPH-158 – Upgrade xstream 1.4.11.1
 - MPH-159 – Upgrade JUnit 4.12

Enjoy,

-The Apache Maven team
Posted 4 days ago by khmarbaise
The Apache Maven team is pleased to announce the release of the Apache Shared Component: Apache Maven Dependency Analyzer, version 1.11.0.

It analyzes the dependencies of a project for undeclared or unused artifacts.

  <dependency>
    <groupId>org.apache.maven.shared</groupId>
    <artifactId>maven-dependency-analyzer</artifactId>
    <version>1.11.0</version>
  </dependency>

Release Notes

Improvements:
 - MSHARED-770 – Upgrade org.ow2.asm:asm to 7.0
 - MSHARED-780 – Add GitHub Informations.

Dependency upgrades:
 - MSHARED-776 – Upgrade maven-shared-components to 33
 - MSHARED-779 – Upgrade maven-invoker to 3.0.1

Enjoy,

-The Maven team

Karl-Heinz Marbaise
Posted 5 days ago by Sally
Open Source Big Data quality solution in use at eBay, Expedia, Huawei, JD.com, Meituan, PayPal, Pingan Bank, PPDAI, VIP.com, VMWare, and more.

Wakefield, MA —12 December 2018— The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today Apache® Griffin™ as a Top-Level Project (TLP).

Apache Griffin is a robust Open Source Big Data quality solution for distributed data systems at any scale. It provides a unified process to measure data quality from different perspectives, as well as to build and validate trusted data assets in both streaming and batch contexts. Griffin originated at eBay and entered the Apache Incubator in December 2016.

"We are very proud of Griffin reaching this important milestone," said William Guo, Vice President of Apache Griffin. "By actively improving Big Data quality, Griffin helps build trusted data assets, therefore boosting your confidence in your business."

Apache Griffin enables data scientists/analysts to handle data quality issues by:
 - Defining – specifying data quality requirements such as accuracy, completeness, timeliness, profiling, etc.;
 - Measuring – source data ingested into the Griffin computing cluster will apply data quality measurement based on user-defined requirements; and
 - Applying Metrics – data quality reports as metrics will be exported to designated destinations.

In addition, Griffin allows users to easily onboard new requirements into the platform and write comprehensive logic to further define their data quality.

Apache Griffin is in use in high volume, high demand environments at 163.com/Netease, eBay, Expedia, Huawei, JD.com, Meituan, PayPal, Pingan Bank, PPDAI, VIP.com, and VMWare, among others.

"eBay contributed Griffin to the Apache Incubator in December 2016 to ensure its future development in a community-driven manner. It started with the idea of how eBay could address the data quality issue across multiple systems, especially in a streaming context," said Vivian Tian, VP of eBay, GM - China Center of Excellence. "Griffin brings a data quality solution to the data ecosystem and ensures data applications have a solid quality foundation. We are extremely happy to see Griffin graduate as an Apache Top-Level Project, and look forward to continued innovation and collaboration with the Apache community."

"We have been using Apache Griffin for about two years, monitoring 1000+ tables with data quality metrics, and are very happy to see it graduate to a Top-Level Project," said Chao Zhu, Senior Director at VIPshop Finance. "Apache Griffin and its data quality DSL help us easily identify data quality issues instantly on our big data platform. In addition, Apache Griffin's architecture is highly extensible. We are looking forward to using it in a real-time data quality management system. We also look forward to contributing some of our minor enhancements to Griffin back to the community."

"We appreciate the Griffin project, which helps so much in our daily data jobs. After years of struggling with the complexity of data quality issues, we turned to Apache Griffin for a new platform that would simplify our data quality pipeline," said Jianfeng Liu, Director of Real-time Data Department at PPDAI. "Because of Apache Griffin's unified model for both batch and stream processing, we've been able to replace legacy systems with one solution that works seamlessly in our production environment. Griffin DSLs have allowed us to dramatically simplify our pipeline and to reduce our efforts a lot. I'm very proud and excited to see that the project is graduating."

"Apache Griffin is one of the best data quality solutions my team has used so far. It has been an exciting journey seeing the Griffin community evolve rapidly, with many people iteratively adopting it and contributing newer capabilities," said Austin Sun, Senior Engineering Manager, Enterprise Service Platform at PayPal. "In the PayPal risk domain, we benefit a lot from Apache Griffin, which provides high quality data so we can make precise decisions and protect our customers. Beyond PayPal risk, I know several other companies also leverage core capabilities from Griffin as their data quality solution. It's my great honor to witness Griffin grow into a Top-Level Project. Way to go, Griffin."

"The Apache Griffin project is yet another showcase of how community over code can work for projects coming out of internal use at companies into the open source," said Henry Saputra, ASF Member and Incubator Mentor for Apache Griffin. "I am proud to be part of the project and one of its mentors from when it was contributed by eBay, in addition to several other projects already donated to the ASF such as Apache Kylin and Eagle. The team has worked tremendously hard to adopt the Apache Way, and has also shown great respect for the open source community in all its design, development, and release processes. As a Top-Level Project, I believe the PMC will help lead the project to much more success in the future."

"Graduation is not the end, it is the beginning of another journey. We hope to take Apache Griffin to the next level with a wider set of features and users," added Guo. "We welcome anyone to join our efforts by helping with product design, documentation, code, technical discussions or promoting Apache Griffin in The Apache Way."

Availability and Oversight

Apache Griffin software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Griffin, visit http://griffin.apache.org/ and https://twitter.com/apachegriffin

About The Apache Software Foundation (ASF)

Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server -- the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 730 individual Members and 6,800 Committers across six continents successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo.

The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Aetna, Alibaba Cloud Computing, Anonymous, ARM, Baidu, Bloomberg, Budget Direct, Capital One, Cerner, Cloudera, Comcast, Facebook, Google, Handshake, Hortonworks, Huawei, IBM, Indeed, Inspur, LeaseWeb, Microsoft, Oath, ODPi, Pineapple Fund, Pivotal, Private Internet Access, Red Hat, Target, Tencent, and Union Investment. For more information, visit http://apache.org/ and https://twitter.com/TheASF

© The Apache Software Foundation. "Apache", "Griffin", "Apache Griffin", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.

# # #
Posted 9 days ago by Sally
Greetings, December: counting down to the end of the calendar year. All the while, the brilliant Apache community remains productive:

Support Apache – help keep Apache software for everyone.
 - Join 730+ generous contributors who have donated nearly $80,000 to the ASF as part of our Individual Giving campaigns. Giving to the ASF feels great and is so easy -- every dollar counts! http://donate.apache.org

Success at Apache – a monthly blog series that focuses on the processes behind why the ASF "just works".
 - Success at Apache: Cookie Monster by Isabel Drost-Fromm https://s.apache.org/cnSe

ASF Board – management and oversight of the business affairs of the corporation in accordance with the Foundation's bylaws.
 - Next Board Meeting: 19 December. Board calendar and minutes http://apache.org/foundation/board/calendar.html

ApacheCon™ – the ASF's official global conference series, now in its 20th year.
 - SAVE THE DATE: ApacheCon North America 2019 will take place 9-13 September in Las Vegas http://apachecon.com/

ASF Infrastructure – our distributed team on three continents keeps the ASF's infrastructure running around the clock.
 - 7M+ weekly checks yield impressive performance at 99.53% uptime. http://status.apache.org/

Apache Code Snapshot – this week, 470 Apache contributors changed 835,412 lines of code over 3,025 commits. Top 5 contributors, in order, are: Jean-Baptiste Onofré, Andrea Cosentino, Andrzej Kaczmarek, Jan Piotrowski, and Tilman Hausherr.

Apache Bahir™ – provides extensions to multiple distributed analytic platforms, such as Apache Spark, to extend their reach with diverse streaming connectors and SQL data sources.
 - Apache Bahir 2.3.2 released http://bahir.apache.org

Apache BookKeeper™ – a scalable, fault-tolerant, and low-latency storage service optimized for real-time workloads.
 - Apache BookKeeper 4.8.1 released https://bookkeeper.apache.org

Apache CouchDB™ – Open Source NoSQL document database using HTTP, JSON, and MapReduce.
 - Apache CouchDB 2.3.0 released https://couchdb.apache.org/

Apache Crail (incubating) – a high-performance distributed data store designed for fast sharing of ephemeral data in distributed data processing workloads.
 - Apache Crail 1.1-incubating released https://crail.incubator.apache.org/

Apache HBase™ – Open Source, distributed, versioned, non-relational database.
 - Apache HBase 2.0.3 released https://hbase.apache.org/

Apache Impala™ – a high-performance distributed SQL engine.
 - Apache Impala 3.1.0 released https://impala.apache.org/

Apache Ignite™ – a memory-centric distributed database, caching, and processing platform for transactional, analytical, and streaming workloads delivering in-memory speeds at petabyte scale.
 - Apache Ignite 2.7.0 Vulnerable Dependencies Updates http://mail-archives.apache.org/mod_mbox/www-announce/201812.mbox/%3CCALUCNEsCwE0fC2XCHi996%3DOdUCZZLK8WzF2KOdaLPYkZzWE_8A%40mail.gmail.com%3E

Apache Jackrabbit™ – a fully compliant implementation of the Content Repository for Java(TM) Technology API, version 2.0 (JCR 2.0) as specified in the Java Specification Request 283 (JSR 283).
 - Apache Jackrabbit 2.18.0 and Jackrabbit Oak 1.9.12 released http://jackrabbit.apache.org/

Apache Kylin™ – an Open Source Distributed Analytics Engine designed to provide SQL interface and multi-dimensional analysis (OLAP) on Apache Hadoop, supporting extremely large datasets.
 - Apache Kylin 2.5.2 released https://kylin.apache.org/

Apache PDFBox™ – an Open Source Java tool for working with PDF documents.
 - Apache PDFBox 2.0.13 released http://pdfbox.apache.org/

Apache PLC4X (incubating) – a set of libraries for communicating with industrial programmable logic controllers (PLCs) using a variety of protocols but with a shared API.
 - Apache PLC4X 0.2.0 released http://plc4x.apache.org

Apache POI™ – Java library for reading and writing Microsoft Office file formats, such as Excel, PowerPoint, Word, Visio, Publisher and Outlook.
 - Apache POI 4.0.1 released https://poi.apache.org/

Apache Qpid™ – the latest release of the newer JMS client supporting the Advanced Message Queuing Protocol 1.0 (AMQP 1.0, ISO/IEC 19464), based around the Apache Qpid Proton protocol engine and implementing the AMQP JMS Mapping as it evolves at OASIS.
 - Apache Qpid JMS 0.39.0 released http://qpid.apache.org/

Apache ServiceComb™ – a microservice framework that provides a set of tools and components to make Cloud application development and deployment easier.
 - Apache ServiceComb Service-Center 1.1.0, ServiceComb Saga 0.2.1, and Java-Chassis 1.1.0 released http://servicecomb.apache.org/

Apache Tomcat™ Native Library – provides a portable API for features not found in contemporary JDKs.
 - Apache Tomcat Native 1.2.19 released http://tomcat.apache.org/

Apache UIMA™ – supports the community working on the analysis of unstructured information with a unifying Java and C++ framework, tooling, and analysis components, guided by the OASIS UIMA (Unstructured Information Management Architecture) standard.
 - Apache UIMA Java SDK versions 2.10.3 and 3.0.1 released http://uima.apache.org

Apache Wicket™ – Open Source Java component oriented Web application framework that powers thousands of applications and sites for governments, stores, universities, cities, banks, email providers, and more.
 - Apache Wicket 7.11.0 released http://wicket.apache.org/

Did You Know?
 - Did you know that Apache Omid (incubating) was selected as the transaction management provider for Apache Phoenix? http://omid.incubator.apache.org/ and http://phoenix.apache.org
 - Did you know that you can see a top-level overview of each Apache project, the category they fall under, timelines, evolution, and overall commit history at https://projects.apache.org/ ?
 - Did you know that Apache Cayenne is an Open Source Java object-to-relational mapping framework? Check out v4.0 of the "ORM superpower" https://cayenne.apache.org/

Apache Community Notices:
 - ASF Operations Summary: Q2 FY2019 https://s.apache.org/d2Fq
 - ASF Annual Report for FY2018 https://s.apache.org/FY2018AnnualReport
 - The Apache Software Foundation 2018 Vision Statement https://s.apache.org/zqC3
 - Foundation Statement – Apache Is Open. https://s.apache.org/PIRA
 - "Success at Apache" focuses on the processes behind why the ASF "just works". https://blogs.apache.org/foundation/category/SuccessAtApache
 - Please follow/like/re-tweet the ASF on social media: @TheASF on Twitter and on LinkedIn at https://www.linkedin.com/company/the-apache-software-foundation
 - Do friend and follow us on the Apache Community Facebook page https://www.facebook.com/ApacheSoftwareFoundation/ and Twitter account https://twitter.com/ApacheCommunity
 - The list of Apache project-related MeetUps can be found at http://events.apache.org/event/meetups.html
 - Flink Forward China will take place 21-22 December 2018 in Beijing https://china-2018.flink-forward.org/call-for-presentations-submit-talk/
 - The Apache Big Data community will be at DataWorks Summit 18-21 March 2019 in Barcelona and 20-23 May 2019 in Washington DC https://dataworkssummit.com/
 - Future dates for Spark + AI Summit 2019 announced: 23-25 April/San Francisco and 15-17 October/Amsterdam https://databricks.com/sparkaisummit/
 - Block your calendars for ApacheCon North America: taking place in September 2019; announcing dates and details soon. http://apachecon.com/
 - Find out how you can participate with Apache community/projects/activities -- opportunities open with Apache HTTP Server, Avro, ComDev (community development), Directory, Incubator, OODT, POI, Polygene, Syncope, Tika, Trafodion, and more! https://helpwanted.apache.org/
 - Are your software solutions Powered by Apache? Download & use our "Powered By" logos http://www.apache.org/foundation/press/kit/#poweredby

= = =

For real-time updates, sign up for Apache-related news by sending mail to announce-subscribe@apache.org and follow @TheASF on Twitter. For a broader spectrum from the Apache community, https://twitter.com/PlanetApache provides an aggregate of Project activities as well as the personal blogs and tweets of select ASF Committers.

# # #
Posted 9 days ago by christ
[IF YOUR PROJECT DOES NOT HAVE GIT REPOSITORIES ON GIT-WIP-US, PLEASE DISREGARD THIS POST]

Hello Apache projects,

I am writing to you because you may have git repositories on the git-wip-us server, which is slated to be decommissioned in the coming months. All repositories will be moved to the new gitbox service, which includes direct write access on GitHub as well as the standard ASF commit access via gitbox.apache.org.

Why this move?

The move comes as a result of retiring the git-wip service, as the hardware it runs on is longing for retirement. In lieu of this, we have decided to consolidate the two services (git-wip and gitbox) to ease the management of our repository systems and future-proof the underlying hardware. The move is fully automated, and ideally nothing will change in your workflow other than added features and access to GitHub.

Timeframe for relocation

Initially, we are asking that projects voluntarily request to move their repositories to gitbox. The voluntary time frame is between now and January 9th 2019, during which projects are free to either move over to gitbox or stay put on git-wip. After this phase, the remaining projects will be required to move within one month; any repositories still left after that will be moved over by us.

To have your project moved in this initial phase, you will need:
 - Consensus in the project (documented via the mailing list)
 - To file a JIRA ticket with INFRA to voluntarily move your project repos over to gitbox (as stated, this is highly automated and will take between a minute and an hour, depending on the size and number of your repositories)

To sum up the preliminary timeline:
 - December 9th 2018 -> January 9th 2019: Voluntary (coordinated) relocation
 - January 9th -> February 6th: Mandated (coordinated) relocation
 - February 7th: All remaining repositories are mass migrated

This timeline may change to accommodate various scenarios.

Using GitHub with ASF repositories

When your project has moved, you are free to use either the ASF repository system (gitbox.apache.org) OR GitHub for your development and code pushes. To be able to use GitHub, please follow the primer at: https://reference.apache.org/committer/github

We appreciate your understanding of this issue, and hope that your project can coordinate voluntarily moving your repositories in a timely manner. All settings, such as commit mail targets, issue linking, PR notification schemes etc. will automatically be migrated to gitbox as well.