Wednesday, February 25, 2015

Creating and Configuring an Azure Private Infrastructure

With the goal of taking advantage of cloud services, we wanted to do what the Microsoft Azure folks call a ‘lift and shift’ migration as a proof of concept for application compatibility. In that context we picked four local virtual machine instances and their application configuration, with the goal of re-configuring them within Azure’s cloud infrastructure.

We picked three “Medium” sized instances (dual core, 3.5 GB of RAM, 128 GB HD) and one “Small (A1)” sized instance (single core, 1.75 GB of RAM, 128 GB HD). We decided on what the names would be, the private IP addresses, and the connection configuration amongst the machines.

Since this is a simple deployment with “MVP” (Minimum Viable Product) in mind, we chose not to have a DNS server; instead we simply entered each machine’s private IP address in every machine’s “hosts” file so that the names would resolve.
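
For reference, a quick way to append those entries from an elevated PowerShell prompt on each VM looks roughly like this (the machine names and addresses below are made-up examples, not the real configuration):

# Append name-to-IP entries to the local hosts file (run as Administrator; example values only)
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value @"
10.0.4.4    testvm1
10.0.4.5    testvm2
10.0.4.6    testvm3
10.0.4.7    testvm4
"@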

I am not going to discuss and screen-capture the Azure portal settings, but at the time of writing there are two Azure portals(?): one is the legacy portal, and the other is the modern portal with progressive disclosure. The legacy portal is definitely more complete, but I ran into instances where I could not take an action on one portal yet could on the other, and vice versa. I have to say the Azure team is hard at work, and I am sure they will get their ducks in a row pretty soon and things will run smoother.

The basic steps are as follows. It is essential to follow a plan so that it is repeatable by others in your company, so take good notes and observe the outcomes on the portal.

  1. Create a storage account within the Azure portal (I used the legacy portal)
  2. Create a virtual network (this picture shows VMs already created).

    blog_azureVNET
  3. Upload the VHD you created into Azure. I used the Azure PowerShell SDK for that. There are also open source tools for Node.js which I have been dabbling with, but PowerShell seems more natural and complete for a Windows platform. Perhaps that will change in a few years, but I digress. Oh, before you do that, you need to import your subscription profile and set the current storage account. The instructions to do that are here, and a minimal sketch follows. Then you’re ready.
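
    A rough sketch of that one-time prep with the classic Azure PowerShell cmdlets (the .publishsettings path and subscription name below are placeholders, not the real values):

    # Download the .publishsettings file (opens a browser), import it, then point the
    # module at the subscription and the storage account created in step 1
    Get-AzurePublishSettingsFile
    Import-AzurePublishSettingsFile "C:\temp\MySubscription.publishsettings"
    Select-AzureSubscription -SubscriptionName "My Subscription"
    Set-AzureSubscription -SubscriptionName "My Subscription" -CurrentStorageAccountName "yourstoragename"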

    The command I used is as follows:

    Add-AzureVhd -Destination "https://yourstoragename.blob.core.windows.net/vhds/yourVhdFileName.vhd" -LocalFilePath "<path_to_local_file>"

  4. Depending on how big the VHD file is, it may take a while, and this is the time to take the dogs for a walk or have lunch/dinner (what have you), since it first creates an MD5 hash and then uploads the image to your cloud storage.

  5. After a successful upload, you can attach this disk to your VMs once they are available.

  6. To create virtual machines on Azure, I prefer to use the Azure PowerShell SDK again, since I want to have control over the names, passwords, IP addresses, and virtual network configuration. Here is my sample script, which defines the variables at the top and creates the VM on Azure in a ‘piped’ fashion.


    ## Basic configuration
    $vmName = "testVM"
    $svcName = "testVM"
    $instanceSize = "Medium"
    $location = "Central US"
    $labelString = "test web and sql server"

    ## Login credentials
    $un = "testadmin"
    $password = "password"

    ## Network variables
    $vnet = "test-vnet"
    $sub = "test-subnet"
    $ip = "10.0.4.4"

    ## Image name
    $image = (Get-AzureVMImage | Where-Object { $_.Label -like "Windows Server 2012 R2*" -and $_.PublishedDate -eq "12/11/2014 2:00:00 AM" })

    New-AzureVMConfig -Name $vmName -InstanceSize $instanceSize -ImageName $image.ImageName -Label $labelString |
    Add-AzureProvisioningConfig -Windows -AdminUserName $un -Password $password |
    Set-AzureSubnet -SubnetNames $sub |
    Set-AzureStaticVNetIP -IPAddress $ip |
    New-AzureVM -ServiceName $svcName -Location $location -VNetName $vnet

I created three similar scripts, one for each VM instance, and ran them via the Azure PowerShell SDK. After a few minutes, your private infrastructure is up and running. Make sure you patch your servers and install your applications.


In order to attach the existing VHD, I recommend using the new portal; the old one would not let me choose the container and such, so the new portal is the way to go. The screen capture below shows the “Choose a disk” step; make sure you pick “OK” at the end to persist your changes.


attaching_an_existing_disk


That is about it. You can attach and detach your VHD file across the VM instances you created in a few minutes, or you can allow network sharing within the instances you have created and shuffle source code and installation files that way.
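
If you would rather script the attach step instead of clicking through the portal, something along these lines should also work with the classic cmdlets. Treat this as a sketch, not a tested recipe, and note that the service, VM, and blob names are placeholders:

# Attach the uploaded VHD to an existing VM as a new data disk, then apply the change
Get-AzureVM -ServiceName "testVM" -Name "testVM" |
    Add-AzureDataDisk -ImportFrom -MediaLocation "https://yourstoragename.blob.core.windows.net/vhds/yourVhdFileName.vhd" -DiskLabel "data" -LUN 0 |
    Update-AzureVM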


In summary, we are seriously considering moving our existing infrastructure to the cloud, either Azure or perhaps another vendor, but doing this takes time: working with our clients, and looking at compliance and security aspects. In cloud terms, what we are utilizing is “Infrastructure as a Service” (IaaS); the next step is “Platform as a Service” (PaaS), and the last is “Software as a Service” (SaaS), which would really allow us to focus on our code rather than maintaining our servers and infrastructure. Yet there is a balance.


“Fundamentals of Azure”, a good starting point with Azure, can be downloaded from here.


Happy Cloud Computing!

Monday, October 7, 2013

Arduino Fun Starter

 

It has been over two years since I got my first Arduino Uno, and just this January I acquired my first Raspberry Pi. The excitement returned, and I got another one after I broke the first one’s SD card slot. Now I have two Raspberry Pis and two Arduino boards running full time, kicking in data and doing stuff.

I was presented with an indoor/outdoor digital thermometer as a gift. Lucky me! Then on a gloomy weekend (we had a few extra of those this season here in MN), I started looking around to build a basic temperature logging circuit using the Arduino Uno. I ran into this site: http://www.hacktronics.com/Tutorials/arduino-1-wire-tutorial.html

All I needed was a DS18B20 digital temperature sensor and a 4.7 kOhm resistor. The circuit is pretty basic, which is exactly what I wanted. Here it is:

CircuitDiagram2

As you can see, the wiring can’t be simpler than this. Here is my set up:

TempCollectionTakeOne

The close up circuitry for the record is this:

CircuitDiagram1

Then I modified the built-in example sketch for the Ethernet shield to accommodate the temperature sensor. As it turns out, you need the two Arduino libraries shown below:

TempServerSketch

Next you will write a bit of code to publish your reading to the web site the Ethernet shield is serving.

Here it goes – first the setup():

setup

Next the loop() in two images:

loop
loop2

Apologies that these are simple images, but this is really cookie-cutter Ethernet shield code, lightly modified. Please type it up and see how it works instead of cloning a piece of code from GitHub or CodePlex and contacting me a few years later because it won’t work in 2017. Who knows? Maybe it will, maybe it won’t. It works in 2013.

This code will render an HTML page on your local network (IP address: 192.168.1.177 in this case) as follows:

websiterender1

Now it is time to persist this data. How to do this? How about Node.js and PostgreSQL? My relatively new friends… Here is the Node.js code, which scrapes this data and moves it into a PostgreSQL database running locally.

nodejs1

Yet another image, but come on, it is only a few lines of code. Should not be an issue. I promise the next blog will have “github” pull-downs & forks. Basically, what the above code is doing is calling in two libraries, “request” and “cheerio” (hope they are available in 2017, but this is open source, you never know). First the “request” library kicks in: it hits the URI, loads up the body, and returns the text of the h2 element. How exciting is that? Yes, very!!
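
Since the Node.js code itself lives in the image above, here is a rough PowerShell equivalent just to show the idea of the scrape in text form. This is not the code from the post, only an illustration; it assumes the page shown earlier at 192.168.1.177 and the h2 element mentioned above:

# Fetch the page the Ethernet shield serves and pull the text out of the first <h2>
$page = Invoke-WebRequest -Uri "http://192.168.1.177"
if ($page.Content -match '<h2>([^<]*)</h2>') {
    $Matches[1].Trim()   # the temperature reading as a string
}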

But I want to run this every 5 minutes… How do I do that? Well, I have a Windows machine running full time; how about I create a “Scheduled Task” for it and be done? Yes, Virginia, it is simple and it works. Look at all the other scheduled tasks in your system if you don’t believe me.

ScheduledTasks
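
For the record, the same task can also be created from the command line with something like this (the task name and script path are made up for the example):

# Run the Node.js scraper every 5 minutes via Task Scheduler
schtasks /Create /TN "TempScraper" /SC MINUTE /MO 5 /TR "node C:\scripts\scrape-temp.js"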

Now that you can read your data value, perhaps you want to persist it somewhere. AWS is a bit too pricey for my taste, so for now I will stay local. How about a table inside a PostgreSQL database? Here it is:

Postgresql1

So that verifies that our data is being persisted and we are collecting it. Yeah! That is how I roll, or not. Who cares if no one ever sees your data?

How about a simple Rails app running on the Raspberry Pi? Anyone? Yeah, okay, me. After a couple of hours of installing Rails on the Raspberry Pi (instructions found here), you will have it running in no time* (*it took me well over an hour to get it working).

Then you can write an app like this:

Homeye1

Note that I am using a 3rd-party library called jquery.flot.js. Then the index.html.erb looks like this:

indexhtmlerb

Now you have a web page to render your temperature data for a given date. If you are lucky, you can see the actual rendering here. How cool is that?

FinalRendering

And it runs on a Raspberry Pi. How cool is that? Well, the answer is always “it depends”. But what I have done and shown is use an Arduino, Node.js, PostgreSQL, a Raspberry Pi, and Rails to collect and present temperature data.

Since then I have acquired this excellent book and completed the project within it. Awesome book, by the way. It introduced me to Fritzing and other components and tools. Easy to follow; I highly recommend it.

DistributedData

Well, now that this project is wrapped up and published, I am looking at other Arduino ideas and suggestions. I am also attending arduino.mn once a month, as my schedule allows, to exchange ideas and collaborate with others. Bluetooth, ZigBee, and WiFi are all interesting, and I have more to explore.

Please let me know what you think of this; I would love to hear your feedback.

Happy Hacking!

Wednesday, December 12, 2012

Imation Online Backup Services–discontinued

Due to economic conditions, a late start in marketing, and a lack of vision, this service has been discontinued. Please read on for historical reasons. Thanks. – 10/7/2013

I am pleased to announce the wide availability of the application I have been working on since late 2010. Its official name is Imation Online Backup. Basically, it is a cloud-based data backup and restore service with a few differentiators from the rest.

Working closely with the development team, my role has been in the deployment pipeline, touching various technologies from the front end to the back end, including anything in the middle. It has been a great learning experience, all the way from payment configuration to setting up the back-end servers, and I am glad to be part of it. We are now shipping, although more work remains, which is exciting yet challenging.

sign-on-page

What does it do?

Two things. First, it backs up your personal data with strong encryption and compression to the “cloud” (aka “a bunch of servers residing somewhere in some data center(s)”). Then it restores your data when you need it, in case of disaster such as a lost hard drive, accidental data deletion, corruption, etc. Pretty simple. You go to a portal, create an account in a few steps, download the desktop client, install it, log in with your credentials, and configure a backup set of the data that is important to you. Finally, hit “Back up” and schedule it at your convenience. Then “Restore” as needed.

Why Imation Online Backup?

With the increased focus on data security, protection, and availability, we offer a few differentiators, with more to come.

Imation Online Backup comprises a front-end desktop client that runs on your desktops and/or servers while communicating with a multi-node, distributed back end hosted in our data centers. A few highlights and differentiators of this solution:

  • Even though initial backups can take some time, your restores are very fast, using a patented technology based on local cache/shadow copy services
  • By design, your data is stored on at least two physical nodes, giving you extra redundancy where other solutions require extra configuration
  • Uses block-level encryption and compression, so your data is safe and stored efficiently
  • Unlimited versioning of your files and databases, with incremental backups that run very fast and with minimal overhead
  • Download your files via any supported browser without needing the desktop client
  • Support for SQL Server and Exchange servers
  • A customized user portal for managing your devices and data, including notifications, scheduling, and billing

As we add new features and wider device & OS support, we are excited to welcome new subscribers and get their feedback. And it is nice to be shipping again. Give this a shot and let me know what you think. Happy Holidays!

Saturday, December 17, 2011

Tapkan Top Three of 2011

Well, yet another new year is upon us, and I would like to share with you my list of the top three. It certainly has been a productive year, and without further ado, here is my list:

ipad2_picture

1.  iPad 2

Got mine a few weeks after the initial release and had it inscribed as well (being a geek). It certainly has been a game changer in the ways I consume news, read books, and communicate. The battery life, size, weight, and dimensions are almost perfect. The only gripe I have is the screen resolution; the Retina display for the next generation of iPad will be very nice. The other nice thing going for the iPad is that corporate executives are all over it and are forcing corporate IT to adopt it and accommodate it on the network. I am certainly taking advantage of this, even though I am not an executive. This is certainly not the case with Android tablets at this time.

With the acquisition of my iPad in mid-April, I stopped buying physical books. Thanks to O’Reilly Media and their Facebook page, I legally acquired a slew of technical books. The Amazon Kindle app for iPad is sufficient, but there is room for improvement. I did test-drive the Amazon Kindle Fire for a weekend, but being an iPad user, it was not the same. And maybe it should not be (you may argue), since it is an “e-reader”. Anyway, if you are in the market and can afford one, I would highly suggest getting an iPad.

2.  Ubuntu (Desktop + Server)

ubuntu-logo-apr08

I have been dabbling with Linux distributions for a while, starting back in the early 2000s with SuSE, which got acquired by Novell. I had a dedicated machine running SuSE for many years, but just over a year ago I switched to Ubuntu, starting with 9 (I think); 10.04 was a great, stable release. I had mostly been running the desktop editions, but recently I also started using the server edition as part of my job. I am impressed by the features and the performance.

Being open source is both a blessing and a curse when it comes to the desktop, though. One could easily ditch Windows or Mac OS if there were support for certain applications, but there is always a need for them since not everything is available. I would really like to see cloud-based storage and computing environments support Linux flavors. Netflix, Amazon Kindle Reader, and Evernote: if you are reading, that goes out to you. Get on board!

3.  VMware vSphere Hypervisor™ (ESXi)

vmware-esxi-client-connect

Hypervisor! Huh, what is that? Well, that is about virtualization, my friends. I am not going to discuss what virtualization is here, but if you are developing and writing software targeting many other platforms, you have to know about this stuff. I had been dragging my feet on getting a decent hypervisor because of time, resources, and the perceived difficulty of configuration.

esx21admin_architecture

Fear no more! ESXi is a breeze to install, and it is FREE (so is Ubuntu) for one server with up to 32 GB of RAM. As recommended by two partners at work, we downloaded and installed it on a Dell T310 server (initially 8 GB of RAM, then we upgraded to 32 GB). I have been concurrently running 3-4 configurations with very little contention. I really have not done any bare-metal comparison, but they run quite decently. A subsequent system crash at home (my poor Ubuntu desktop on DIY hardware ;-( ) created an opportunity to try it at home/office, and it certainly changed my computing landscape. FTW! I have consolidated my old VHDs onto VMware and haven’t looked back. Go check it out!

xmas-tree

What an amazing year, with some certainly sad news! “Stay hungry, stay foolish” [Steve Jobs]. Looking forward to 2012… Season’s Greetings, everyone!

Happy Coding,

Baskin

direct links:
[1] http://www.apple.com/ipad/
[2] http://www.ubuntu.com/
[3] http://www.vmware.com/products/vsphere-hypervisor/overview.html

Friday, September 9, 2011

Case for Cloud/Online Storage and why you need one

With the increasing amount of information and content we generate, we rely on various devices to store our digital data. Recently this has been emphasized by the fact that we all use some sort of digital medium at an increasing rate. The content we generate on a day-in/day-out basis is not always in our control, and this can become an issue when the content leaves our hands or when we lose it unexpectedly.

For example, I typically use external HDs of various sizes to move content around and to archive. A few years back I purchased a 160 GB Seagate Travel drive, then later got a few more in the 250 GB to 320 GB range. A couple of months back I purchased a 2 TB Western Digital drive for a little over $100. And just yesterday I read about Seagate's announcement of the availability of a 4 TB drive. As you can see, capacities are growing, but our digital media are more scattered than ever. Oh, and don't forget the fact that these devices can and will fail...

So what are the alternatives? In the last six months or so, as part of my job at Imation, I have been involved in planning, configuring, and testing various cloud storage technologies, and I benchmarked a few along the way. These include Mozy, CrashPlan, Carbonite, Nine Technology, Remote-Backup Systems, Backblaze, etc. I have done comparisons in terms of speed, back-end efficiency (as applicable: compression, deduplication), and, of course, usability. All of them work, and they do a decent job depending on your requirements, your environment settings, your flavor of OS, etc.

iCloud

Also, Apple's iCloud is just around the corner if you are a Mac-head. Google has started integrating its Android content with Google+, and I am sure Microsoft and others won't be too far behind in pushing their users' digital content to their online services and applications.

But don’t wait! If you haven't checked out a cloud/online storage solution by now, you are certainly missing out and leaving yourself vulnerable to an unexpected external hard disk crash or media lost by any means (my dog ate my SD card ;-). If you have tons of content and limited bandwidth, some vendors do offline cloud seeding, which is a way to load your data into their cloud offline; this involves shipping a physical medium to the cloud storage vendor’s site.

So go check out those and the many other cloud storage vendors and decide which one you like best. I have a few favorites, and you should ping me if you want to hear about them. In the meantime, do your homework and consolidate your digital content online. Concerned about privacy? Really? (Just kidding; that is probably another blog post coming later.)

Baskin

Sunday, April 3, 2011

404–This is not the web page you are looking for

On a given website, not all users will type the right link, nor will your ASP.NET MVC application always have the exact controller action to render a given view.  In such cases, you need a nice 404 page.  You can go as fancy as github’s 404.  Check this out.

github_404

You can achieve this, and handle other errors similarly, by tapping into the Application_Error handler in your Global.asax file:

protected void Application_Error(object sender, EventArgs e)
{
    var app = (MvcApplication)sender;
    var context = app.Context;
    var ex = app.Server.GetLastError();
    context.Response.Clear();
    context.ClearError();
    var httpException = ex as HttpException;

    var routeData = new RouteData();
    routeData.Values["controller"] = "Error";
    routeData.Values["exception"] = ex;
    routeData.Values["action"] = "http500";
    if (httpException != null)
    {
        switch (httpException.GetHttpCode())
        {
            case 404:
                routeData.Values["action"] = "http404";
                break;
            // implement other http statuses here as well.
        }
    }
    IController controller = new ErrorController();
    controller.Execute(new RequestContext(new HttpContextWrapper(context), routeData));
}


Then add an ErrorController.cs file and implement the following:


namespace Http404NotFound.Controllers
{
    public class ErrorController : Controller
    {
        public ActionResult Http404()
        {
            return View();
        }

        // implementation of other statuses can go here...
    }
}

Using the new Razor syntax and file format, the Http404.cshtml can look like this:


@{
    ViewBag.Title = "Http404";
}

<h2>Http404</h2>
<img src="../../Content/images/github_404.PNG" />

You can find more MVC cookbook recipes in the ASP.NET MVC 2 Cookbook.


Happy Coding!

Thursday, December 30, 2010

Red-Green-Refactor

This is quite possibly the last blog of the year, and I want to wish everyone a happy and productive 2011.  Well, what a year it has been.  I should probably do a 2010 retrospective in a later post, but for now I want to stress the importance of unit testing.

Currently, my team and I are working on a partner integration project where we call a few web services to initiate a process and, in return, expose a few services (URIs) to receive the process results.  We are using WCF, C#, .NET 3.5, LINQ to SQL, and SQL Server as the backend.

Although we are only a few months into the design, specification, and development of the project, we have made decent progress.  There are two separate projects used for testing the code base.  The code has been factored out into three separate projects (Services, Business, and Data Access), which is quite typical for .NET enterprise-type projects.

I am not sure if you belong to the camp of testing your code, or whether you would rather use the FDD method (Faith-Driven Development: “I believe my code will work”).  This is plucked from a relatively recent tweet by @lazycoder, though I think it was a retweet; you would have to look into who originally coined the term.  But FDD is a practice one can easily fall into, especially when you are time constrained and just want to call “I am done”.  I hope you belong to the TDD (Test-Driven Development) camp.

So, why unit test?  I have seen and read many blogs about this, but I would rather share with you my opinion of why and how it works for me.  First and foremost, it makes me “think” about my code and what I want to deliver.  You have one piece of functionality you want to develop, and develop well.  Once you have tested and worked out that feature set, you move on to the next.

It also makes you think about writing code that is decoupled.  Ask yourself: how many times do you “new up” an object in your method?  Can you do dependency injection instead?  By the way, there is a great book (currently in MEAP) by Mark Seemann which I have been reading along with, called Dependency Injection in .NET.  I suggest you check it out.  Any time you create an instance of an object in your method, you are creating coupling.  This can cause some pain when those instances are not available and you decide not to unit test.  But you already know that, right?

The majority of the time, you get to see and test the edge cases, even though they could otherwise be overlooked.  You can easily fake/stub/mock an object and create test cases for those edge cases.  You can plan for and exercise exceptions in isolation.  Don’t let the user/interface find out what you are about to do.

Increased Decoupling

I can’t say it enough: do you want decoupled code?  Better design?  You should unit test.  And refactoring becomes an easy thing to do, because unit testing allows you to focus on one thing at a time, not three or four levels up or down.

In short (and I want to keep it short), if you are writing enterprise applications, I see no way of doing it well without unit tests (plus integration tests).  Oh, before I leave, please check out ReSharper, a great plugin for Visual Studio that especially helps with this process, along with many other enhancements.

Thanks for reading and Happy New Year!

Baskin