
Incredible. Things Are Different Now February 26, 2016

Posted by Marty Wolfe in Uncategorized.

It’s not just a robot that Boston Dynamics has demonstrated; it’s something that could take over a huge number of jobs and roles that humans perform today.  I was just painting a room in my house, frustrated at not getting the borders perfectly painted.  A robot like this could eventually handle mundane tasks like that, and change the face of war and manufacturing at the same time.  Amazing.

Atlas Robot

 


Governance for IoT, Hybrid Cloud, and Microservices January 8, 2016

Posted by Marty Wolfe in cloud, Hybrid Cloud, infrastructure, Netcentric, SOA, Uncategorized.

Back in June 2015, I described an approach to managing and governing a Hybrid Cloud environment.  Governance really is the last thing we think about.  Just deploy a bunch of applications and data “into the Cloud” (whatever that means.. usually it means “off premise” or “somewhere other than the data centers I have been using for a while”).

Coming this year, IBM is publishing a book on Hybrid Cloud, and I have written the chapter on something called the Hybrid Governance Fabric: a set of important characteristics and decisions involved in changing existing governance, or establishing governance in the first place.  Since my original blog post, I have done several deployments of governance in large enterprises and wanted to share more thoughts on this.

Please keep an eye out for the book in mid 2016 (or let me know and I’ll send you a link when it’s published).  Take a look at the set of topics in the chapter.. and tell me what you think!

There really are a lot of parallels between this Hybrid Cloud deployment model and how we conduct our lives every day.  Thinking about it more, spreading applications, software systems, and data sources across many different physical environments and technologies is really (or nearly) the same problem as what we now call the Internet of Things (IoT).  The need to technically and mechanically tie these components, microservices, and systems together is vitally important, and it’s something everyone is working on, as evidenced by the huge number of different platforms out there.

There are two important factors in determining how to make this combined set of components operable and usable in ways that ensure security, quality, and maintainability.

  1. The ability to leverage data and analytics to determine the best and most optimized combinations of services and “things”.  This is where not just analytics but Cognitive capabilities, like IBM’s Watson platform, are key to making this work (see the sketch after this list).
  2. The ability to govern and manage such a menagerie of interconnected systems, locations, people, and components, to ensure their reliability and maintainability.  When you combine systems of engagement, systems of record, and systems of insight, figuring out the location of your data or the root cause of problems gets that much more complicated.
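To make the first point a bit more concrete, here is a minimal sketch of analytics-driven service selection.  Everything in it is hypothetical (the class names, metrics, and weights are mine for illustration, not from Watson or any IBM product), but it shows the shape of the idea: observed data, not static wiring, picks the combination of services.

```python
# Minimal sketch of analytics-driven service selection.  All class
# names, metrics, and weights here are hypothetical; the point is that
# observed data, not static configuration, picks the combination.
from dataclasses import dataclass

@dataclass
class ServiceCandidate:
    name: str
    latency_ms: float     # observed average latency
    error_rate: float     # observed fraction of failed calls
    cost_per_call: float  # in dollars

def score(c: ServiceCandidate) -> float:
    # Lower is better; the weights are illustrative only.
    return 0.5 * c.latency_ms + 1000 * c.error_rate + 100 * c.cost_per_call

candidates = [
    ServiceCandidate("on-prem-inventory", latency_ms=40.0,
                     error_rate=0.002, cost_per_call=0.0),
    ServiceCandidate("cloud-inventory", latency_ms=120.0,
                     error_rate=0.001, cost_per_call=0.01),
]
best = min(candidates, key=score)
print(f"Routing requests to: {best.name}")
```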

So.. when you combine all of these services, when they have to be interconnected and you have to protect your data, governance and visibility are key.

Keep a lookout for the book; your comments and thoughts are welcome.

 

My Journey with SDR and the “Noise” In Our World November 15, 2015

Posted by Marty Wolfe in sdr, Uncategorized.

Just got started experimenting with one of these inexpensive SDR devices.  You plug it into a USB port (preferably USB 2.0) and run some pretty decent free software.  Typically you can get these SDR devices for about $20 – $30, and they come with a small multi-band antenna.  What’s really cool is the wide frequency range over which you can tune these devices, and the ability to decode the huge amount of digital data being broadcast across the spectrum.

So I purchased one of these devices.. hooked it up to an existing mobile antenna I had for Ham Radio and could easily hear many local FM radio stations.. quite well.. the software even decodes the RDS text giving the name of the song.
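If you’d rather drive the dongle from code than from a GUI, here is a minimal sketch using the pyrtlsdr and scipy Python packages (an assumption on my part; this isn’t the software named above, and the station frequency is just an example):

```python
# Minimal sketch: tune an RTL-SDR dongle to a broadcast FM station and
# demodulate a couple of seconds of audio.  Assumes the pyrtlsdr and
# scipy packages; the station frequency below is just an example.
import numpy as np
from rtlsdr import RtlSdr
from scipy.signal import decimate

sdr = RtlSdr()
sdr.sample_rate = 1.2e6      # 1.2 MS/s of raw IQ samples
sdr.center_freq = 99.5e6     # example FM station (99.5 MHz)
sdr.gain = 'auto'

samples = sdr.read_samples(int(2 * sdr.sample_rate))  # ~2 seconds
sdr.close()

# FM demodulation: the phase difference between consecutive IQ samples
# is proportional to the modulating (audio) signal.
audio = np.angle(samples[1:] * np.conj(samples[:-1]))

# Decimate in two stages from 1.2 MHz down to 48 kHz for audio use.
audio = decimate(decimate(audio, 5), 5)
print(f"Recovered {len(audio)} audio samples")
```

The phase-difference trick is a standard quadrature FM discriminator; for real listening you’d also want de-emphasis and proper filtering before playback.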

Actually, I got interested in this because there’s a real security issue here.  Many of the devices and appliances we have in the enterprise and at home are leaking a ton of EM signals.  It’s really fascinating to see these traditionally invisible signals and discern their meaning.  Thus the interest in all this SDR work.

Once I got started, I realized that my current setup wasn’t good at picking up all the signals I really wanted to dig into, especially given all the trees that surround half my yard (… trees typically absorb high frequency signals).

I recently updated my setup with a better antenna, better antenna connectors, and some better cables.  Plus I downloaded some additional software to be able to track ADS-B transponder signals.

I thought I would start by listening to unique signals.. ones that are hard to actually hear, and ones which have to be decoded.  So I chose ADS-B transponder signals, which are broadcast directly from commercial aircraft.

Here’s the list of parts..

  1. An SDR dongle – the “RTL-SDR Blog R820T2..” got it from this link >> http://www.amazon.com/gp/product/B0129EBDS2?psc=1&redirect=true&ref_=oh_aui_detailpage_o07_s00
  2. A better antenna.. I bought a Diamond D130NJ “discone” antenna.  Wide band reception.  Bought it from DX Engineering.. free shipping.. at a competitive price >> http://www.dxengineering.com/parts/dmn-d130nj
  3. Really good coax.. as low loss as is reasonable.. I bought 50 ft of this 400MAX coax, better than RG-216.  Also from DX Engineering, since they let you construct the cable with a quick online tool.  Worked great.  Got N connectors on each end, much lower loss than PL-259 or SMA.
  4. Downloaded the RTL1090 software (http://rtl1090.web99.de) and Virtual Radar Server (http://www.virtualradarserver.co.uk).  There’s a great “installer” you can download from the RTL1090 site that will configure nearly everything.

A good video on setup and configuration is here: https://www.youtube.com/watch?v=UT8LO7hM640
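And if you want to poke at the raw 1090 MHz messages yourself, here is a minimal sketch using the pyModeS Python library.  To be clear, this is an illustration I’m adding, not what RTL1090 or Virtual Radar Server actually use; the example message is the one from the pyModeS documentation:

```python
# Minimal sketch of decoding a single raw ADS-B message with the
# pyModeS library (pip install pyModeS).  Shown for illustration only,
# as a hand-rolled alternative to the RTL1090 + Virtual Radar Server
# setup described above.
import pyModeS as pms

# A 112-bit ADS-B message as hex, taken from the pyModeS documentation.
msg = "8D40621D58C382D690C8AC2863A7"

print("ICAO address :", pms.adsb.icao(msg))  # which aircraft sent it
tc = pms.adsb.typecode(msg)
print("Type code    :", tc)

if 9 <= tc <= 18:
    # Airborne position message: carries barometric altitude.
    print("Altitude (ft):", pms.adsb.altitude(msg))
```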

Here are some pics..

[1] Once I got all the setup complete, here’s a screenshot of tracking planes directly from their transponders on 1090 MHz.  You can see RTL1090 running in the foreground and Virtual Radar Server running in the background (doing a real-time display of the planes being received):

SDR - ADSB - Tracking Software.png

[2] This is what the setup actually looks like.. you can see the 400MAX cable snaking out the window frame.. it’s connected to a $20 RTL-SDR dongle (just to the right of and below the laptop.. hard to see).  That all plugs into a very slow Acer laptop running Windows 10, plus Virtual Radar Server and RTL1090.

SDR - ADSB - Workstation.jpg

[3] I originally used an Amateur Radio dual band antenna.. typically used on your car with a magnetic mount.. that covers the 440 MHz (70 cm) and 144 MHz (2 m) bands.  That worked “ok”, but then I purchased a Discone style antenna, which has a very large coverage range.. like 20 MHz to 2 GHz.. it really worked well.  I haven’t mounted it permanently yet, so for now it sits on top of my car (a decent ground plane).. only when I’m using it, of course, and not driving!  Working well so far.

SDR - Antenna.jpg

Using EM Fields To Detect Everyday Objects. November 10, 2015

Posted by Marty Wolfe in Uncategorized.

A really great combination of SDR technology and EM fields.

There was a great TED talk on similar ideas, and there’s lots of content on YouTube about the use of inexpensive SDR technology to detect the “leakage” of EM fields from all kinds of devices.  This Disney video is the most interesting version I have seen to date.

“Wanderers” narrated by Carl Sagan July 3, 2015

Posted by Marty Wolfe in Uncategorized.

A really inspiring piece of work.

Hybrid Governance Fabric June 4, 2015

Posted by Marty Wolfe in Uncategorized.

The Rapidly Emergent Hybrid Environment

One of the rapidly emerging use cases, especially for those enterprises with fairly complex IT topologies, is using services from various external providers in conjunction with their existing systems.

A good example of this is the Internet of Things (IoT), where various sensors, devices, mobile apps (systems of engagement), and legacy information sources (systems of record) are combined.  Many of the key characteristics of Service-Oriented Architecture (SOA) apply here, but now in a geographically distributed and cross-provider model.

Now What?

So how do you manage all of this?

You can imagine the issues that arise from the increasing complexity of this truly distributed system of “pieces”.

How can we track the movement of data, and track down the various issues that arise when these types of heterogeneous systems are used to build applications for the business?  The data is moving back and forth on and off-premise, and the number of integrations between systems is far higher than in a single system.  The composition of these business applications makes it much harder and more complex to manage changes, release updates, incidents, and issues, as well as to ensure security and compliance.

So What?  What’s The Point?

There is so much energy and activity around the deployment of Hybrid IT, Hybrid Cloud, and the whole move to consuming services in a utility model.  Regardless of how you want to name the approach, the truth about Cloud is that there will always be some “systems” which remain on-premise and some that are off-premise.  You could argue that some organizations and enterprises will use only capabilities they don’t own; in that case everything will be consumed as a utility.  However, for every case where you would say “there are no IT systems or data they will have on premise”, I can counter that there is at least one scenario where some piece of enterprise collateral will remain on their premise, at their site, and/or in their possession.

So, here’s the point..

All enterprises and organizations, regardless of size, will retain some data or technical component “on their premise”, and thus governance is needed to track, check, and maintain this cross-premise deployment of data, business logic, and applications.  They will build more and more of these applications which integrate various components and data sources, both on and off-premise (take a look at this article).

The Important Focus Areas

If you can make these two capabilities work effectively then Hybrid Cloud will work for your enterprise:

  1. “Hybrid” Root Cause Analysis (HRCA)
  2. Data Lineage and Traceability (DLT)

These are really the most important areas to get up and running.  When going down the path of Hybrid Cloud and wanting it to actually work in real-life scenarios, it’s as simple as making sure you can do those two things.. and do them well.

Establishing an interconnected set of processes across many different providers and services is what enables effective root cause analysis and lets you know the path of your data.

This interconnected set of processes is something I am calling a “governance fabric”.  It is a single approach to managing the movement of data between service providers, different services, data sources, and heterogeneous infrastructures.

Hybrid Root Cause Analysis (HRCA)

One of the most challenging aspects of Hybrid Cloud, where many different components, systems, and data sources are stitched together to form a single business function, is determining how to resolve functional and delivery issues as they arise.  Hopefully, good design of each “service” lessens the possibility of functional or operational problems.

The key aspect is to create incident, problem, and change management processes that take into account having a heterogeneous set of functionality, service providers, and methods of connectivity (APIs, middleware, and network infrastructure).  Aspects of this are described in early work on the SOA Governance and Maturity Model (SGMM), but here are the main points:

  • Incident and Problem Management need to take into account not just functional issues but the interconnections between systems, the various APIs being used, and the movement of data and how it may have changed as it moves between services and service providers.  Handling and identifying an incident that likely spans multiple components in multiple locations is the key here (a sketch of what such an incident record might look like follows below).
  • Change Management is where leveraging DevOps and Continuous Integration techniques is really valuable.  It’s vitally important that both the building of services and the integration of those services are merged into the overall process of operations governance.
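Here is a minimal sketch of what a cross-provider incident record for this kind of “hybrid” root cause analysis might look like.  All names and fields are hypothetical; the essential idea is that one correlation ID, and the full chain of providers, services, and APIs a request touched, travel with the incident:

```python
# Minimal sketch of a cross-provider incident record for "hybrid" root
# cause analysis.  Every class and field name here is hypothetical; the
# point is that one incident carries the full chain of services,
# providers, and integration points a request touched.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ServiceHop:
    provider: str   # e.g. "on-premise", "public-cloud-a"
    service: str    # the component or microservice involved
    api: str        # the API or integration point used

@dataclass
class HybridIncident:
    correlation_id: str   # one ID propagated across all hops
    opened_at: datetime
    description: str
    hops: list[ServiceHop] = field(default_factory=list)

incident = HybridIncident(
    correlation_id="req-7f3a",
    opened_at=datetime.now(timezone.utc),
    description="Order lookup returned stale data",
    hops=[
        ServiceHop("public-cloud-a", "order-api", "REST /orders/{id}"),
        ServiceHop("on-premise", "inventory-db", "JDBC"),
    ],
)
# Root cause analysis starts at the last hop and walks backward.
for hop in reversed(incident.hops):
    print(f"{hop.provider} / {hop.service} via {hop.api}")
```

The hops list is the design choice that matters: root cause analysis can walk the chain backward across provider boundaries instead of stopping at the edge of any one system.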

Data Lineage and Traceability (DLT)

The most important IT capability is not servers, networks, services, or databases; it’s the data.  This is the most interesting and most cutting-edge aspect of establishing common governance.  The ability to know the location, status, and security of data is vital as it moves between services and components, especially from on-premise to off-premise locations.  Assuring the integrity of the data means knowing its lineage and having the ability to trace and track its movement across systems; that is key to adequately governing data, which is, above and beyond all other things, the most important asset an enterprise or organization owns.
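Here is a minimal sketch of what such a lineage record might look like.  The names are hypothetical, and a real deployment would use a metadata catalog rather than a local file, but the core idea is that every movement of data gets an append-only, tamper-evident entry:

```python
# Minimal sketch of recording data lineage events as data moves between
# on- and off-premise systems.  Names are hypothetical; real deployments
# would use a metadata catalog, but the core record looks like this.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class LineageEvent:
    dataset: str        # logical name of the data being moved
    source: str         # where the data came from
    destination: str    # where it landed
    moved_at: str       # ISO-8601 timestamp
    content_hash: str   # fingerprint, so tampering in transit is visible

def record_movement(dataset: str, source: str, destination: str,
                    payload: bytes) -> LineageEvent:
    event = LineageEvent(
        dataset=dataset,
        source=source,
        destination=destination,
        moved_at=datetime.now(timezone.utc).isoformat(),
        content_hash=hashlib.sha256(payload).hexdigest(),
    )
    # Append-only log: the trail itself must be tamper-evident.
    with open("lineage.log", "a") as log:
        log.write(json.dumps(asdict(event)) + "\n")
    return event

record_movement("customer-orders", "on-prem/crm-db",
                "cloud/analytics-bucket", b"...serialized records...")
```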

Establishing common governance in a Hybrid Cloud model has turned out to be the most important aspect of going beyond just deploying tools for Hybrid Cloud and consuming Cloud services.  Being able to assure the integrity of data as it moves between the components of a Hybrid deployment, integrated with a process to determine the root cause of problems and manage changes, is key to a successful deployment.

“Watch People Code” February 19, 2015

Posted by Marty Wolfe in Uncategorized.

I completely get why this is interesting.  I look at writing code as an art form, so just as it’s interesting to watch someone paint, it’s interesting to watch the creative process of building something.  The article focuses on a Russian programmer writing a new Reddit search engine in Python.  People thought Twitch was a crazy idea, but it just sold to Amazon for $1B.. and I can see this as even more educational.  Watching someone write code and talk through their thought process can be a really effective way to learn or improve your skills.

http://motherboard.vice.com/read/thousands-of-people-are-watching-this-guy-code-a-search-engine?trk_source=popular

Looking Back at the “Trash 80” :-) February 7, 2015

Posted by Marty Wolfe in Uncategorized.

I actually had the Model III version! .. it had 16KB of memory, and you could actually write some amazingly useful software.  I mostly used BASIC and it worked great.  Software was saved and loaded from magnetic cassette tapes through a consumer-grade tape recorder.  It took about 40 minutes to load the word processor, and then you couldn’t turn off the computer until you were done, since that would dump the memory and the word processor would have to be reloaded.  I remember turning in a book report, printed on an Epson MX-80 dot matrix printer with different fonts and styles (italics, underline, etc.), and everyone was totally amazed.

Check out this walk down memory lane with the 1981 TRS-80 computer catalog.

http://mashable.com/2015/02/06/radio-shack-catalog-1981/?utm_cid=mash-com-Tw-main-link

Completely Amazing Real Time Rendering. February 7, 2015

Posted by Marty Wolfe in Uncategorized.

This is a really impressive real-time render of a fake Parisian apartment using the Unreal game engine.  Several notable things include a mirror in the bathroom whose reflection shifts correctly as the player moves, and the glossy table surfaces.

Devices, Bitcoin, and the Internet of Things.. January 26, 2015

Posted by Marty Wolfe in Uncategorized.

A really interesting collaboration between IBM and Samsung on creating a digital currency supply chain.  The model looks good, but it surely requires a Hybrid network fabric and the ability to actually trace the transactions, and more importantly the data, as it moves between systems.  The “supply chain of data” is key.

Check out this interesting article on the topic and there are some additional links to supporting materials, also a good read:
https://securityledger.com/2015/01/ibm-and-samsung-bet-on-bitcoin-to-save-iot/