Danny's Tech: Where West and East Intersect

Thursday, November 15, 2007

Virtualization of Embedded Systems

A press release from Zilog about VirtualLogix caught my attention: embedded systems are finally getting it. VirtualLogix seems to be an up-and-coming hypervisor provider. I'm sure there are others, but I haven't looked too hard....

Copyright 2007, DannyHSDad, All Rights Reserved.

Virtualization: Sun and HP

It seems that Sun has thrown its weight behind Xen: "Sun Bids $2 Billion To Join Virtualization Gold Rush: The company will commit R&D dollars to its Xen-based hypervisor -- xVM -- for generating virtual machines and its Sun xVM Ops Center for managing them."

HP, on the other hand, has its "Virtualization and Power Management Technologies," though it isn't clear from the news articles what that really is. It seems that most writers (journalists?) do not have a clue what virtualization is all about....

Copyright 2007, DannyHSDad, All Rights Reserved.

Tuesday, November 13, 2007

BIOS to Hypervisor: Phoenix's transformation

I guess I missed this article: "BIOS maker Phoenix Reinvents Itself As Virtualization Vendor."

Phoenix has been working on the HyperCore hypervisor and a Linux platform called HyperSpace. Only time will tell, but one advantage they have over other hypervisor vendors is their huge market presence as a BIOS vendor. Probably better than Intel Inside, since the BIOS works with all chip vendors.

Overall, my impression is that Phoenix is competing with VMware, but the Phoenix CEO is quoted:
"VMware is building 18-wheelers; we're just building a little motor scooter"
We'll see...

Copyright 2007, DannyHSDad, All Rights Reserved.

Hypervisor is hot: Oracle and Microsoft

"Oracle takes on VMware, others, with its own hypervisor: Oracle VM introduced at Oracle OpenWorld event" and "Microsoft to offer standalone hypervisor".

Seems that VMware is getting more competition these days. And I'm still waiting for the other shoe to drop: IBM, the granddaddy of hypervisors!

Copyright 2007, DannyHSDad, All Rights Reserved.

Monday, September 17, 2007

Virtualization at home and at work

"Virtualization homes in on desktops" points out how virtualization can have a place at home, like having a secure partition for online banking applications. They do point out one weakness of virtualization: it requires lots of memory. Each partition needs its own independent set of memory, which can add up when you have more than 2 partitions. There are ways to make read-only memory shareable (like a single copy of the OS), but that's easier said than done, since the current crop of OSes wasn't written with virtualization in mind.
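
To put rough numbers on that memory overhead, here's a back-of-the-envelope sketch. All sizes are made-up, illustrative values (not measurements of any real hypervisor), but the arithmetic shows why sharing the read-only OS image matters once you have a few partitions:

```python
# Back-of-the-envelope memory math for partitions running identical OSes.
# Without sharing, every partition carries its own full OS copy; with
# read-only sharing, one OS image serves all partitions and only the
# private read/write data is duplicated.

def total_memory_mb(partitions, os_image_mb, private_mb, share_readonly):
    """Total memory needed for `partitions` identical partitions."""
    if share_readonly:
        return os_image_mb + partitions * private_mb
    return partitions * (os_image_mb + private_mb)

# e.g. 4 partitions, a 512 MB OS image, 256 MB private data each:
naive = total_memory_mb(4, 512, 256, share_readonly=False)   # 3072 MB
shared = total_memory_mb(4, 512, 256, share_readonly=True)   # 1536 MB
print(naive, shared)
```

With these (invented) numbers, sharing the read-only pages cuts the footprint in half, and the gap only grows as you add partitions.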

And slightly older news: "VMware dangles next-gen virtualization goodies." The fault-tolerance ideas would be a great use of virtualization.

So, there are a few areas of virtualization that could use some work. Unfortunately, I don't have the energy to give it much thought these days (you can see that I haven't been posting much recently).

Copyright 2007, DannyHSDad, All Rights Reserved.

Thursday, September 13, 2007

Virtualization week: VMware and more

This week, VMware hosted a conference and much news came out. Things that stood out for me:
  1. Hypervisor in the hardware: Companies are now looking to have their hypervisor (virtualization software) be part of the hardware. Unfortunately, IBM already has a leg up on them all, since its hypervisor is found in various PowerPC hardware [and probably any new x86 hardware, if they are still making such servers].
  2. JeOS (Just Enough Operating System): Canonical is releasing JeOS, a virtualization-specific Ubuntu Linux. This is a variation of KVM, but hopefully it will be less painful to use than KVM (which requires a specialized QEMU). Unfortunately, there is no official info about it (that I can see with Google, this morning).
  3. Virtualization standard container proposed: Partitions (the instances of OS+apps running on the virtual machines) are saved in proprietary formats today, but Xen, VMware, and Microsoft are going to standardize on one format. That's good for users of hypervisors, especially if hypervisors become plug and play, or if you want to move a partition from one machine running Xen to another running Viridian. Pretty soon they'll be talking about run-time version management.
Lots of good news coming out for the world of virtualization.

Copyright 2007, DannyHSDad, All Rights Reserved.

Saturday, September 08, 2007

Future of Virtualization

Now that VMware has gone IPO and XenSource has been bought by Citrix, virtualization has made it to the financial press. However, the future isn't so clear, since there are other players like Microsoft, with their Viridian and System Center Virtual Machine Manager 2007 (SCVMM07) [which will support Xen and VMware in the future], and smaller companies like Virtual Iron Software.

And don't forget the granddaddy of them all: IBM. They have released at least two public versions of their hypervisor: rHype and sHype (via Xen). Since they have been at it for a few decades on their mainframes and workstations, you can be sure they have both the depth and the breadth in understanding virtualization.

With all that written, virtualization "out in the wild" is rather a new phenomenon. The x86 virtualization hardware like Intel's Vanderpool and AMD's Pacifica came out in large volume in 2006 and even today people are still trying to figure out what it means and how best to use virtualization.

This is like the web in 1995, when Netscape IPO'd. The Internet was around before browsers, but the momentum started with the Netscape browser in 1994. Microsoft piled on with Internet Explorer, and many others joined the race (like Opera and Apple's Safari). Today, Netscape has morphed into Mozilla, with the Firefox browser. And Google rules the web "world."

So, 12 years from now, it's hard to say what virtualization will look like. I believe there will be many shakeouts, along with newcomers usurping the current front runners. And as with the web winners [of 2007] (like Google, eBay, Amazon, and MySpace), I personally won't place any specific bets on where virtualization will lead...for now....

Copyright 2007, DannyHSDad, All Rights Reserved.

Saturday, June 16, 2007

Virtualization Versioning

We have configuration management for dealing with static items like source code and compiled binaries and even PDF documents.

However, there is, as far as I can tell, no version control for dynamic execution environments. In these days of virtualization, it is now possible to selectively specify the exact versions of the kernel and dynamic libraries (DLLs for Windows people) that are ideal for a specific version of an application. This way, the application developer can specify the exact versions that will work with a given program. Then the user can mix and match newer or older versions of libraries and appropriate patches to ensure a reliable run-time for the application. The user can also vary the libraries for optimum memory usage, run-time performance, or run-time stability.

Virtualization Run-Time Configuration Management (VRTCM or virtchim?) is what I have in mind.

Note that this would be a simple control of the various kernels and libraries out there that can be safely linked into a given app.
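
A minimal sketch of what such a VRTCM manifest and checker might look like. Every library name, version number, and function here is invented for illustration; the point is just that an app pins the ranges it was tested against and a resolver validates a candidate environment:

```python
# Hypothetical VRTCM manifest: the app declares the (min, max) version
# range it was tested against for the kernel and each library it links.
APP_MANIFEST = {
    "kernel": ("2.6.18", "2.6.23"),
    "libssl": ("0.9.7", "0.9.8"),
    "libc":   ("2.3.6", "2.5"),
}

def version_tuple(v):
    """Turn '2.6.18' into (2, 6, 18) so versions compare numerically."""
    return tuple(int(x) for x in v.split("."))

def environment_ok(manifest, env):
    """True if every pinned component in `env` falls inside its tested range."""
    for lib, (lo, hi) in manifest.items():
        have = env.get(lib)
        if have is None:
            return False  # required component missing entirely
        if not (version_tuple(lo) <= version_tuple(have) <= version_tuple(hi)):
            return False  # version outside the tested range
    return True
```

A partition manager could run such a check before launching the app, or use it to pick which of several installed library versions to bind into the partition.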

The next level would allow for virtualization of the kernel/library interfaces so that mismatched interfaces can be made to work dynamically (force incompatible versions to run together, in case one needs a newer/older library which is more secure or more reliable but not supported by the user's application).

Another level of virtualization would be to allow user-defined patches (for those roll-your-own hackers) to run in place of the official binaries available. I suppose those patches available online could be used by others as well. By using a virtualized environment, the risks of viruses and corruption are much easier to control and contain.

Copyright 2007, DannyHSDad, All Rights Reserved.

Thursday, March 29, 2007

Virtualization Performance Gap

"Load Testing a Virtual Web Application: Measuring the Performance Impact of Virtualizing a Web Application Server" shows how badly virtualization impacts the performance of a processor with hyperthreading. Not a trivial problem, since the performance drop is rather significant.....

Copyright 2007, DannyHSDad, All Rights Reserved.

Saturday, March 24, 2007

JPC: Java PC simulator

"JPC: Computer Virtualization in Java" sounds like a promising project (hat tip to /.). However, I'm not impressed after using the demo: the initial interactivity is good, but on Java 1.5, after a few seconds of playing, the lag between input and output becomes pretty much unusable (I was playing Invader). There is probably some kind of garbage-collection timing problem, I would assume, since it hangs for a split second and then the image speeds up to real time and pauses or slows down. Not very useful for me (on a 2.13 GHz Pentium M with 2 GB of RAM).

I suppose a newer 1.6.x Java might help, but still....

Copyright 2007, DannyHSDad, All Rights Reserved.

Tuesday, March 20, 2007

Virtualization Saves Money

For computer buyers, that is. "Virtualization leads IDC to cut server forecast." It's bad news for chip and computer makers but great news for those who have to buy them. With virtualization you get more bang for your buck, which translates to better use of existing hardware, which means lower capital costs, which means companies that provide computer services (web servers, etc.) can pass the savings on to their customers.

Like any technology breakthrough, it takes time to see real savings, but once it catches on, the price change is permanent. I love how technology gets faster, better, and cheaper over time....

Copyright 2007, DannyHSDad, All Rights Reserved.

Wednesday, January 10, 2007

Linux Virtualization info at kernelnewbies

Here's a good starting point for Linux virtualization at "kernelnewbies" -- their regular info on kernel hacking is good too, but I'm a bit frustrated that there seems to be no direct link from the main www URL to the virt URL.

Friday, January 05, 2007

KVM and beyond: bare metal apps and microapp multi-threading

Linux KVM (Kernel-based Virtual Machine) allows Linux to be a hypervisor (or a "super" supervisor, or meta-operating system) and control other OSes (including multiple copies of Linux).

What KVM (or any hypervisor, like Xen or rHype) allows is for non-OS code to run in a partition (an isolated environment). That is, individual applications can be written to run on "bare metal" in an OS-less environment, or anything in between. The question is how much the partition handles on its own without a real OS. Either you convert existing libraries to work in bare-metal mode, or you make API calls outside the partition (through hypervisor calls, IO reads/writes, or virtualized-HW interrupts, which are trapped by the hypervisor and either passed to another OS partition or handled directly by the hypervisor, as in the case of KVM).

This is somewhat like a microkernel, where a small kernel distributes real work outside in user mode rather than trying to do everything in supervisor mode. Taking this concept further, a microapp may be the best way to take advantage of a multi-core or multi-threaded system. Be it a game or a simulator, it is hard for an application to take advantage of a multi-core system, since an app is usually broken down into large chunks of functionality. As a programmer, it's natural for me to think in one long sequential flow rather than in minimal, small functions stitched together by a microapp "kernel." Yet until microapps become common, I think it will be very hard to take advantage of multiple cores efficiently.

The problem with microapps is with design, debug, and maintenance. Debug is challenging enough on a multi-core system, and microapps would only exacerbate the complexity. That is, it's easy enough to debug an individual microapp, but once multiple microapps are running and interacting, then the fun begins! I believe that the only way to manage this complexity is to have an equally complex simulator/debugger.
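
A toy sketch of the microapp idea, using nothing beyond the standard library. The function names and the use of a thread pool are my own invention: the "app" is decomposed into tiny independent functions stitched together by a small scheduler, which is what would let a real microapp kernel fan the pieces out across cores:

```python
# Microapp sketch: instead of one long sequential routine, the work is
# split into many small, independent functions (microapps) that a tiny
# "kernel" dispatches and then stitches back together. A ThreadPoolExecutor
# stands in for the multi-core dispatch a real microapp kernel would do.
from concurrent.futures import ThreadPoolExecutor

def microapp_square(x):
    # One minimal unit of work, with no shared state -- safe to run anywhere.
    return x * x

def microapp_sum(values):
    # A stitching step that combines the independent results.
    return sum(values)

def run_pipeline(data, workers=4):
    """Fan the small units out to workers, then stitch the results together."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        squares = list(pool.map(microapp_square, data))
    return microapp_sum(squares)

print(run_pipeline([1, 2, 3, 4]))  # 30
```

Debugging each microapp in isolation is trivial; the hard part the post worries about is exactly what this sketch hides: the interactions once many microapps run concurrently.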

What those complexities might be will be explored in the future....

Copyright 2007, DannyHSDad, All Rights Reserved.

Tuesday, January 02, 2007

Virtualization Linux et al.

"Virtual Linux: An overview of virtualization methods, architectures, and implementations" is a great introduction to virtualization, much better than my attempt.

Copyright 2007, DannyHSDad, All Rights Reserved.

Saturday, December 16, 2006

Virtualization All the Way: SoVirt and HaVirt

Virtualization is too often talked about from a very narrow view: either the raw hardware perspective or simply software libraries/drivers [so that you can install one piece of software without disturbing others].

Virtualization is basically an abstraction that gives programmers one less thing to worry about. Hardware virtualization (I'm thinking "HaVirt" or "HawVirt/HwVirt" is easier to say than HV or HardVirt) is where the programmer doesn't have to worry about the underlying hardware. With a HaVirt CPU, you can run more than one OS (or multiple copies of the same OS) on a single processor without changing the hardware or rebooting to run each instance of an OS. With virtualized storage, you can mount a drive and not worry how or where the space is allocated (unlike a SAN, which is managed per set of drives).

In the past (and present), virtual machines tried to virtualize programming languages by making the language independent of the hardware (the Java VM or the Smalltalk VM).

In my mind, virtualization happens at the most basic level: input and output should be independent of any OS or hardware. UNIX has some of this ability with text-based programs, where IO can be piped from one program to another. This, however, is conditional on using the stdin and stdout "channels" for input and output, respectively. This should not be: I as a programmer shouldn't have to worry about where and what my IO is: I get input and I dump output. Be it text, graphics, or mouse/tablet movements, I shouldn't have to worry about where I get my stuff and how my output gets handled (by another program or the OS). Yes, for text and graphics, I can worry about formatting, but that's not something I should specify in detail, since some things like button locations should be up to the user, not the programmer.

Just as people have language preferences, they should be able to choose the default dialog box size and location, its button locations, and the mouse focus [along with specific settings per program type and per individual program -- a hierarchy of user preferences]. The same goes for font name, font/background colors, default size, and even spacing (double space, single space, etc.). Look and feel should be dictated by the user first and programmers second, and any override must be done with user permission. [OK, it seems I'm getting sidetracked into more UI issues than programming/virtualization issues.]
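
The stdin/stdout point can be seen in a classic UNIX-style filter, sketched here in Python (the function name is my own): the program only talks to abstract streams, so the caller decides whether those streams are a shell pipe, a file, or an in-memory buffer -- the program itself never knows.

```python
# A UNIX-style filter: the program reads from *some* input stream and
# writes to *some* output stream. It never knows where the bytes come
# from or go to -- the pipe (or the caller) is the virtualization layer.
import io

def upcase_filter(inp, out):
    """Read lines from `inp`, write upper-cased lines to `out`."""
    for line in inp:
        out.write(line.upper())

# The same function works whether the streams are sys.stdin/sys.stdout
# in a shell pipeline or in-memory buffers, as here:
src, dst = io.StringIO("hello, pipes\n"), io.StringIO()
upcase_filter(src, dst)
print(dst.getvalue())  # HELLO, PIPES
```

Hooked to sys.stdin and sys.stdout instead, the very same function becomes a shell-pipeline citizen (`echo hello | python filter.py`), which is exactly the IO independence the paragraph above is asking for beyond text.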

And you can argue that processing [the core of any program] should be virtualizable. That is, I as a programmer shouldn't have to worry about the underlying language of the machine I'm programming, be it assembly language, pseudo code (a VM), or some higher-level language. What this core can be is what I'm pondering, to see if there is a better way...

Copyright 2006, DannyHSDad, All Rights Reserved.

Wednesday, November 08, 2006

VMX Builder Tutorial

"VMX Builder: Create virtual machines in minutes" seems like a simple way to build your VM using VMware tools. I don't have Linux up and running quite yet: I've installed Fedora Core 6 but haven't tried an internet connection yet, wired or wireless. Linux is such a pain when it comes to hardware drivers....

Friday, October 27, 2006

Everlasting Tech Boom?

Fortune has: "This tech boom has legs: For several reasons - especially growing demand in developing countries - tech's run most probably will last many years."

Let's take their two points apart:

"First, everybody wants technology." Just because they want or desire tech doesn't mean people can afford it. Not just developing nations but developed ones too. For example, yours truly would love to buy a new computer [especially one with virtualization hardware built in, to play with full virtualization], now that Fedora Core 6 is released with Xen support. Unfortunately, we had to spend a lot of our savings to move and settle down in SoCal. So no tech purchases for a while!

"And second, technology has become radically easier to create." I don't think so. Some things have improved [the iPod is a good example: I use it every weekday on my walking commute to/from work], but computers are still so hard to use. Just today, I had to reinstall a wifi driver to restore network access to my notebook. What a pain, since I had to restore the system a few times to make sure something else didn't go wrong. I probably wasted about an hour of my time. Software is all wrong: not only is it buggy but it's also hard to use -- I'm not talking about a specific GUI but the basics, like having to explicitly save files or not having infinite undo/redo.

With the economy tanking, both developing and developed nations will struggle to sustain the tech growth. And until things become so easy that people "have to have it" (like the iPod), it won't take off in a way that will keep growing.

Copyright 2006, DannyHSDad, All Rights Reserved.

Thursday, October 19, 2006

Debug and Virtualization

Here is an incomplete thought I've been having over the past few days. When debugging programs which run on both server and client [like browser code and server code], it can be a pain to figure out what's going on where. However, I think that virtualization can somehow be used to help abstract a layer or two, so that debugging can take place more independently of the OS [of either or both the client and server OSes].

I noticed that when a program displays stuff in a browser, you have at least 3 layers: the code doing the printing, the text handler, and the pixel handler. This is true with graphical browsers as well as text [lynx] browsers. Where you have layers, opportunities exist for virtualization. How that might be possible is something I'll have to chew on.

Copyright 2006, DannyHSDad, All Rights Reserved.

Wednesday, August 30, 2006

Usability and better programming

The two articles seem to go together:
"Race Is On to Woo Next-Gen Developer"
and
"Sci-Fi: A New Kind of OS"

The proposed OS is a HAL 9000-like tool which adapts to your working style and filters your tool selections based on how you work. I would think it should be at the per-application level, all coordinated by the OS. With a hypervisor, you could even do it across multiple OSes!

The next-gen developer tooling: make programming easier, especially in the area of dynamic languages. Things like CASE tools were supposed to do just that in the 80's [and 90's], but they never really took off (i-Logix Statemate was the closest thing to such an ideal, but its price kept it unnoticed). Dynamic languages like Smalltalk should have done just what the article is talking about, but due to licensing and price, they never had a chance. Java [and Eclipse] have the potential, but it all seems too haphazard [not something I can really put my finger on -- just my feeling].

See also eWeek's "The Future of Programming: Less Is More," especially the part about LOP (language-oriented programming), where you can mix and match different languages in one programming environment. Not too far from what I've been blogging about.

So you mix it all together and you get an adaptive programming environment that prompts you with suggestions on where to go or what to do next, and lets you use different languages as appropriate to the problem you want solved (even suggesting alternative translations to different languages).

Copyright 2006, DannyHSDad, All Rights Reserved.

Tuesday, August 22, 2006

Linux goes for lightweight OS

"Linux heavies plan lightweight virtualization: Novell and Red Hat have concrete plans to build "container" virtualization into their Linux products." This allows one OS to be shared across multiple partitions, so that only the read/write data is unique per partition [these are called containers -- one and the same OS runs inside every partition], rather than having both an OS and data per partition [which allows a different OS per partition].

If you're confused, then my apologies for not writing it up well. Comment or email and I'll try again later.

Copyright 2006, DannyHSDad, All Rights Reserved.
