Saturday, October 10, 2009

Cloud'ed Thoughts

These were some of the questions posed to me during the cloud computing panel discussion at CSI Annual Convention 2009.

Each one of you has a different view (PaaS, services, testing, startup, management) in the domain. A 5-minute warmer on your take on cloud computing based on your current work will be great. This will set the stage nicely for the discussion.
There are many “definitions” of cloud computing, but for me “Cloud Computing is the fifth generation of computing, after the mainframe, the personal computer, client-server and the web.” It's not often that we get a whole new platform and delivery model to build businesses on. And what's more, it's a new business model as well: using 1,000 servers for 1 hour costs the same as using 1 server for 1,000 hours. No upfront costs, completely pay as you go!
How has cloud computing suddenly crept up on us and become technologically and economically viable? For three reasons:
  1. The use of commodity hardware, with increased software complexity to manage redundancy on such hardware. Perfect examples of such software are virtualization, MapReduce, the Google File System, Amazon's Dynamo, etc.
  2. Economies of scale. Storage in a medium-sized data center costs about $2.20/GB/month, while in a large data center it costs about $0.40/GB/month - a saving of more than five times, which cloud computing vendors have been able to pass on to their customers. In general, cloud infrastructure players can realize a 5x to 7x decrease in cost.
  3. The third and, in my view, most important reason: many organizations had the need to scale but not the ability to scale. As the world became data intensive, players realized that unless scalable computing, scalable storage and scalable software were available, their business models would not scale. Consider analytics as an example: some years back it was possible for mid-sized companies to mine the data in their own data centers, but with data doubling every year they have been unable to keep up, and they have decided to scale out to the cloud. Amazon and Google realized this from their own needs very early, and look, here we are eating their dog food!
Developers with new ideas for innovative internet services no longer require large capital investments in hardware to deploy them, and they can potentially go from 1 customer to 100,000 customers in a matter of days. Over-provisioning and under-provisioning are no longer a concern if your product is hosted on a cloud computing platform. This lets small companies focus on their core competency rather than worry about infrastructure, and it enables a much quicker go-to-market strategy.
Another advantage is that clouds are available in various forms:
  • Amazon EC2 is as good as a physical machine and you can control the entire software stack.
  • Google AppEngine and salesforce.com are platforms that are highly restrictive but good for quick development, and they let the platform itself handle the complexity of scaling.
  • Microsoft Azure is at an intermediate point between the above two.
So depending on your needs, you can choose the right cloud!
As I said earlier, it's a new development environment, and there is a lot of scope for innovation, which is what my company “Clogeny” is focusing on.
Cloud computing is not just about “compute” – it is also storage, content distribution and a new way of visualizing and using unlimited storage. How has storage progressed from multi-million dollar arrays and tapes to S3 and Azure and Google Apps?
I remember that when I started writing filesystems, I needed to check for an error indicating that the filesystem was full. It just struck me that I have no need for such error checking when using cloud storage. So yes, it is actually possible to have potentially infinite storage.
Storage: Storage arrays have grown in capacity and complexity over the years to satisfy the ever-increasing demand for size and speed. But cloud storage is pretty solid as well: Amazon, Microsoft and most other cloud vendors keep 3 copies of the data, with at least 1 copy at a separate geographical location. When you factor this into the costs, cloud storage is pretty cheap. Having said that, cloud storage is not going to replace local storage; fast, expensive arrays will still be needed for IOPS- and latency-hungry applications, though the market for such arrays may taper off.
Content Distribution: A content delivery network (CDN) is a system of nodes in multiple locations that cooperate to satisfy requests for content efficiently, moving content around so that the node nearest to the user serves each request. All the major cloud providers offer content distribution services, improving reach and performance since requests can be served from the nearest available server anywhere in the world. This makes distribution extremely scalable and cost efficient. The fun part is that the integration between cloud storage and the CDN is seamless and can be done through simple APIs.
Visualizing storage: Storage models for the cloud have changed compared to the POSIX model and the relational databases we are used to. The POSIX model has given way to a more scalable, flat key-value store in which a “bucket-name, object-name” tuple points to a piece of data. There is no concept of the folders and files we are used to, although a folder-file hierarchy can be emulated for ease of use. Amazon also provides SimpleDB, a non-traditional database that is again easier to scale, though your data organization and modeling will need to change when migrating to it. And MapReduce is a framework for operating on very large data sets in highly parallel environments; it works on structured or unstructured data alike.
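To make the MapReduce idea concrete, here is a toy word count in Python. This is just a sketch of the map/reduce shape of the computation, not any real framework's API; Hadoop or Google's MapReduce would distribute the map calls across machines, shuffle the pairs by key, and distribute the reduce calls the same way.

from collections import defaultdict

def map_phase(document):
    # map: emit a (word, 1) pair for every word in the document
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # reduce: sum the values for each key
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the cloud is the platform", "pay as you go"]
pairs = [p for doc in docs for p in map_phase(doc)]
print(reduce_phase(pairs))   # {'the': 2, 'cloud': 1, 'platform': 1, ...}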
Consider SmugMug as an example: the online photo-sharing company estimates that it has saved $500,000 in storage expenditure and cut its disk storage array costs in half by using Amazon S3.

Cloud computing breaks the traditional models of scalability and infrastructure investment, especially for startups. A 1-person startup can easily compete with an IBM or a Google on infrastructure availability if the revenue model is in place. What are the implications, and can you give an example of how?
Definitely. Startups need only focus on their revenue model and on implementing their differentiators; infrastructure, management and scaling are inherently available, in a pay-as-you-go manner, so ups and downs in traffic can be sustained. For example, some sites get hit by very high traffic in the first few weeks and need costly infrastructure to service it, but then the load tapers off and the infrastructure lies unused. This is where the pay-as-you-go model works very well. So yes, cloud computing is a leveller, and it is fostering many start-ups.
Many businesses are also using cloud computing for scale-out: their in-house data center can handle a certain amount of load, and when load goes beyond that point they spill over to the cloud. Such hybrid computing is sometimes more economically viable.
Xignite employs Amazon EC2 and S3 to deliver financial market data to enterprise applications, portals, and websites for clients such as Forbes, Citi and Starbucks. This data needs to be delivered in real-time and needs rapid scale up and scale down.
What do you see when you gaze into the crystal ball?
Security is a concern for many customers, but consider that the most paranoid customer of all, the US government, has started a cloud computing initiative called "Apps.gov", where they are providing SaaS applications for federal use. Even where there are issues, they are being surmounted as we speak. Cloud computing has reached critical mass, and the ecosystem will continue to grow.
In terms of technology, I believe there will be some application software running on-premise with another piece running in the cloud for scaling out. The client part can provide service during disconnected operation and, importantly, can help resolve latency issues. Most cloud computing applications will have built-in billing systems, based either on a standard or on software that both the vendor and the customer trust. I would love to see some standards emerge in this space, since that will help accelerate acceptance.
Over the long term, absent other barriers, economics always wins - and the economics of cloud computing are too strong to be ignored.

A "Cloudy" day at CSI Annual Convention 2009


I had a very interesting opportunity to be one of the speakers on the panel discussion on cloud computing at CSI Annual Convention 2009. As it turned out, the entire day was "cloudy", with most topics and discussions centered around cloud computing. Most people agreed that the cloud is the next generation of computing, but there are still doubts as to which form of cloud computing will take off. The conclusion is that there IS a lot of hype, and when it has died down, the products and companies that solve real problems will survive. People who try to monetize the medium instead of the product might end up failing. Here are some excerpts from the day.

The day started with a keynote address on "Cloud Computing - Challenges and Opportunities" by Girish Venkatachaliah from IBM. His take was that about 20% of IT will move to the cloud in the next few years, and that currently it's more hype than substance.

Dr. Srikanth Sunderrajan from Persistent gave a great talk on Google AppEngine, a Platform-as-a-Service offering on top of which his company recently implemented a product. His take was that AppEngine lacks many features and is a straitjacket environment with almost no flexibility: they had to write complex libraries to get filesystem-like storage and ended up using Amazon EC2 to work around AppEngine's shortcomings. In his view, Google needs to open up the platform and become more like Amazon's cloud offerings. One good thing about AppEngine is that development and deployment are fast and easy.

The panel discussion on cloud computing included Monish Darda from Websym, Karan Gujral from BMC, Gireendra Kasmalkar from SQS, Vikram Rajkondwar from Microsoft, Samir Bodas from ICERTIS and yours truly. The discussion covered PaaS, IaaS, SaaS, testing for the cloud, how startups can leverage the cloud, managing clouds, and much more. Vikram's views, stemming from his experience working on Microsoft Azure, were extremely insightful.

Here are some of the take-away points from the discussion:
  • The cloud phenomenon has been seeded by economies of scale. Cloud infrastructure providers use commodity hardware with complex software to manage redundancy, and the savings are passed on to the consumer, making the cloud a very cost-effective platform.
  • The evolution of virtualization technologies has enabled cloud data centers to increase efficiency. All parts of the stack will be virtualized as we progress.
  • Storage is an important aspect of the cloud. Cloud vendors maintain 3 copies of the data, so in terms of the reliability-to-cost ratio, cloud storage is on par with or cheaper than local storage. And unlimited storage is available on a completely pay-as-you-go model.
  • The cloud is a very interesting medium for testing and QE, since these phases come late in the SDLC and require investment in hardware and provisioning. Clouds make it possible to do functional and scale testing without upfront investment.
  • The most compelling use of cloud computing is when load and usage cannot be predicted. The cloud can be used to augment the local data center, scaling out when load exceeds certain levels; such hybrid clouds will be the future of data centers. Another prime use case is periodic load: in an on-premise data center it leads to low utilization and hence lower ROI, whereas cloud capacity can be provisioned as needed, improving the ROI for such companies.
  • Today even a 1-person startup can compete with Google and IBM in terms of infrastructure. If a good revenue model is in place, startups can use the pay-as-you-go model to their advantage. Companies like SmugMug and ElephantDrive have done just this to keep up with their phenomenal growth; without clouds, their growth would have been stymied for want of scale-out capability.
  • Data center management companies will need to upgrade their products to manage clouds. They will have to look at provisioning, job scheduling and profiling for the cloud alongside the on-premise data center.
  • Everyone agreed that on-premise data centers will never be replaced by the cloud. They will be augmented. A lot of web hosting will move to the cloud though.
The conclusion was that companies and consumers should look through the hype and identify solutions that actually solve their problems. Not every piece of software becomes a better solution when delivered as Software-as-a-Service. If you find your sweet spot in the cloud, you are poised for phenomenal growth.

Thursday, September 24, 2009

Talk on Lustre at FOSS.in/2008

FOSS.IN is one of the world’s largest Free and Open Source Software (FOSS) events, held annually in India. The event is highly focussed on FOSS development and contribution. Over the years, it has attracted thousands of participants, and the speaker roster reads like a “Who’s Who” of FOSS contributors from across the world.

Last year I had the privilege of giving a talk on "Lustre: A Scalable Clustered Filesystem" at this event. This is one of the few events with a very techie agenda, and I had some interesting discussions with the delegates. The breakout sessions, where hackers sit together and actually code up a feature, are really cool. Not many events have people actually coding!

Here is my presentation. It describes the architecture of Lustre - a distributed, clustered filesystem which runs on 7 of the top 10 supercomputers. It goes on to describe some of the cutting-edge features that are being planned for future Lustre releases.



Tuesday, September 22, 2009

Inspirations from TiECon Delhi 2009

It isn't often that you get a chance to rub shoulders with industry leaders and successful entrepreneurs. TiECon gives you a chance to connect and interact with founders of successful companies, venture capitalists and budding entrepreneurs. It was great to be in the presence of people with amazing clarity of thought and expression.

As I was contemplating the best way to structure this blog, I remembered attending some of the panel discussions where certain quotes and thoughts just resonated with me. It's as if you have a gut feel for certain things, but when they are put in perfect, concise words it becomes easy to put them into action. So the best way to express what I saw and learnt is in the form of quotes that I gathered personally or in discussions.

  • "Entrepreneurship is a difficult word to define, an entrepreneur has a difficult path to choose and a difficult path to tread. Many succeed and many fail. An entrepreneur who trips and falls down once; if he is a true entrepreneur will pick himself up and walk the same path or a different path with greater determination. An entrepreneur chases a dream, pursues an idea, and seeks a goal. So I think there is much to be said about entrepreneurship and any organization which promotes entrepreneurship rather than simple businesses." Mr. P. Chidambaram, Home Minister, Ministry of Home Affairs, Government of India
  • "Good entrepreneurs react differently to tough times" Deep Kalra, Founder & CEO, MakeMyTrip.com
  • "Thinking how soon I can 'breakeven' is a big fallacy. We need to think how will we scale up" Achal Ghai, Managing Director, Avigo Capital
  • "When the brand is in experimental stage, even spam works, especially in India" Manish Vij, Co-Founder and Business Head, Quasar Media
  • "80-90% of requests for venture capital get rejected due to lack of a marketing plan" Achal Ghai, Managing Director, Avigo Capital
  • "Only when you have done enough 'pilots' and have customers who can be brand ambassadors should you think of doing advertising" Manish Vij, Co-Founder and Business Head, Quasar Media
  • "For your core team you need like minded people and people who can work with equity" Dhruv Shringi, CEO & Co Founder, Yatra.com
  • "A part of your skill as an entrepreneur is to be a good salesman" Yashish Dahiya, Co Founder & CEO, Policy Bazaar.com
  • "Its better to be No.1 in a niche market than No.20 in a large market"
  • "You must be able to state your core value proposition in a single sentence"
  • "Be non-conforming"
  • "Leadership is about action not position"

Tuesday, August 25, 2009

Compression, encryption and deduplication

When you are doing backups or archival, compression and encryption are a must: compression to save precious bandwidth, and encryption whenever you store your data outside the boundaries of your data center or home. And since you are archiving data for long periods, you will also want to deduplicate it to reduce the amount of storage. But encryption and compression are quite incompatible with deduplication. Deduplication tries to find identical blocks in the data set so that blocks can be shared and storage space saved, yet even a small change in your data can cause the encryption/compression algorithms to produce very different output, fooling the poor little deduplication engine. Deduplication loves patterns in the data, and good encryption algorithms try hard to remove patterns from the data for better security.
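A tiny Python sketch shows the problem. This models a generic fixed-size-block dedup engine (not any particular product's): identical blocks are found by hashing them, and a single byte inserted at the front shifts every block boundary so nothing matches any more - which is exactly the kind of wholesale change that whole-file compression or encryption produces on every small edit.

import hashlib, os

def block_hashes(data, blksize=4096):
    # hash each fixed-size block; this set is what a dedup store keeps
    return set(hashlib.sha1(data[i:i + blksize]).digest()
               for i in range(0, len(data), blksize))

pattern = os.urandom(4096)
data = pattern * 100                  # highly redundant data
print(len(block_hashes(data)))        # 1 -- dedup stores a single block

shifted = b"!" + data                 # one byte inserted at the front
print(len(block_hashes(data) & block_hashes(shifted)))   # 0 -- nothing matches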

Some googling led me to rsyncrypto and rsyncable gzip, where compression and encryption are modified to be rsync-friendly. rsync has an excellent algorithm with which only the changed parts of the data need to be sent over the network when syncing data sets.

rsyncable gzip is a patch to gzip (exposed as the --rsyncable option) which causes compression to be done in chunks rather than processing the entire file in one go. This localizes changes within the compressed output, allowing rsync to do a better job, at the cost of slightly lower compression ratios in some cases.

rsyncrypto modifies the standard encryption scheme by localizing the effects of a change, keeping the knock-on changes to the ciphertext minimal. This again allows rsync to work much more efficiently. It may reduce the strength of the encryption somewhat, but it will still be good enough for most use cases.

Now, this problem obviously has an easier solution: deduplicate first, then compress, then encrypt. But this flow may not always be possible.
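Sketched in Python, the friendly ordering looks like this: chunk the data, dedup on the plaintext chunks, then compress and encrypt each unique chunk individually. The encrypt() below is a do-nothing placeholder (in reality you would use a proper cipher such as AES), but the structure is the point: because dedup runs first, compression and encryption can no longer hide the duplicates from it.

import hashlib, zlib

def encrypt(blob, key):
    return blob                       # placeholder, NOT real crypto

def store(data, key, blksize=4096):
    recipe, chunks = [], {}
    for i in range(0, len(data), blksize):
        chunk = data[i:i + blksize]
        h = hashlib.sha1(chunk).digest()
        if h not in chunks:
            # dedup sees the raw chunk; compress+encrypt only unique chunks
            chunks[h] = encrypt(zlib.compress(chunk), key)
        recipe.append(h)              # ordered hashes, to rebuild the file later
    return recipe, chunks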

Wednesday, August 5, 2009

Rackspace CloudFiles

CloudFiles is a cloud storage offering from Rackspace. You can use it for archiving and backing up your web-based data. The data is replicated across 3 locations, giving you excellent redundancy.

I tried out their offering and here are some of my notes:

- Max file size of 5GB - most cloud storage vendors have such limits, mostly due to protocol limitations.
- Along with CloudFiles they provide a Content Distribution Network (CDN) for distributing data across their data centers around the world.
- 15 cents/GB/month storage cost; 8 cents/GB inbound and 22 cents/GB outbound bandwidth.
- APIs in multiple languages - PHP, Java, C#, Python, Ruby.
- They have a "browser panel" which can be used to upload files and distribute them via the CDN.
- A Mozilla extension and a Mac app to access CloudFiles storage are available. Neither is maintained by Rackspace, and they are *not* backup tools, just front-ends.
- Provides SSL for security.
- Tokens expire every 24 hours and clients need to re-authenticate; every operation must carry an authentication token.
- FireUploader simulates a hierarchical file system structure, which CloudFiles does not support natively.
- Cyberduck on the Mac is GPL software and a good frontend for CloudFiles.

- The only way a user can access content from another account is if the owner shares their username/API access key or a session token.
- Files are called "objects" in Rackspace lingo. Data is saved as-is, without compression or encryption. Objects do support metadata in the form of key-value pairs, so you can "tag" files and organize your data; metadata is limited to 4KB and a maximum of 90 individual tags per object.
- You can enable the CDN on specific containers. Each CDN-enabled container gets a unique Uniform Resource Locator (URL). For example, a CDN-enabled container named "photos" might be referenced as http://cdn.cloudfiles.mosso.com/c3131 - if this container holds an image named "baby.jpg", that image can be served through the LimeLight Networks CDN at http://cdn.cloudfiles.mosso.com/c3131/baby.jpg. This is how SaaS applications can serve data stored in customers' accounts.
- Code snippets and API documentation are good, and code examples are available.
- The APIs are RESTful, i.e. they follow the Representational State Transfer style.

- A GET call on the account returns a list of all the containers in it; you can get this information in JSON or XML format as well.
- A maximum of 10,000 container names is returned at a time; you can make a continuation call with a "marker" to retrieve the rest.
- A HEAD call on the account returns the number of containers and the number of bytes used.
- A GET operation on a container lists the objects in the container.
- Pseudo-hierarchical folders/directories:
You can simulate a hierarchical structure in CloudFiles by following a few guidelines: use the forward slash character '/' in object names as a path-element separator, and create "directory marker" objects; you can then traverse the nested structure with the "path" query parameter. This is best illustrated by example. The container where the objects reside is called "backups", and all object names in this example start with a prefix of "photos", which should NOT be confused with the container name.
In the example, the following “real” Objects are uploaded to the storage system with names representing their full filesystem path.

photos/animals/dogs/poodle.jpg
photos/animals/dogs/terrier.jpg
photos/animals/cats/persian.jpg
photos/animals/cats/siamese.jpg
photos/plants/fern.jpg
photos/plants/rose.jpg
photos/me.jpg

To take advantage of this feature, "directory marker" objects must also be created to represent the appropriate directories. The following additional objects need to be created; a good convention is to create them as zero- or one-byte files with a Content-Type of "application/directory".
photos/animals/dogs
photos/animals/cats
photos/animals
photos/plants
photos

Now, issuing a GET request against the container name, coupled with the "path" query parameter naming the directory to list, traverses these "directories". Only the request line and results are depicted below, excluding other request/response headers.

GET /v1/AccountString/backups?path=photos HTTP/1.1

photos/animals
photos/plants
photos/me.jpg

To traverse down into the "animals" directory, specify that path:

GET /v1/AccountString/backups?path=photos/animals HTTP/1.1

photos/animals/dogs
photos/animals/cats

By combining the "path" query parameter with the "format" query parameter, you can easily distinguish virtual folders/directories by Content-Type and build interfaces that allow traversal of the pseudo-nested structure.
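For instance, creating the "photos/animals" marker object from Python might look like this. It is a sketch using the standard library's httplib (http.client in Python 3); the storage host and the token are placeholders, and the account/container names reuse the example above.

import httplib   # http.client in Python 3

conn = httplib.HTTPSConnection("storage.clouddrive.com")   # placeholder host
conn.request("PUT", "/v1/AccountString/backups/photos/animals", "",
             {"X-Auth-Token": "<token>",
              "Content-Type": "application/directory",
              "Content-Length": "0"})
print(conn.getresponse().status)      # 201 Created means the marker exists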

- A DELETE call on a container will not succeed if the container still holds objects.
- A HEAD operation on an object retrieves the object's metadata and standard HTTP headers.
- A GET call retrieves the object's data. It supports headers like If-Match, If-None-Match, If-Modified-Since and If-Unmodified-Since, and it is possible to request a byte range as well.
- A PUT call writes or overwrites an object's metadata and content. End-to-end data integrity can be ensured by including an MD5 checksum of your object's data in the ETag header.
- Chunked requests can be sent if you do not know the size of the object you are PUT'ing, but the total size must be less than 5GB.

Shortcomings mentioned by Rackspace themselves:
- You cannot mount or map a CloudFiles account as a network drive.
- Files cannot be modified in place, so block-level changes are not possible.
- Containers cannot be nested.
- No ACLs for security.

cURL is a command-line tool available in most UNIX environments that lets you send HTTP requests and inspect the responses from the command line or from within a shell script, so you can work with the REST API directly instead of going through the client APIs. I used cURL to test out the calls provided by CloudFiles.
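The basic flow is easy to script as well. Here is the authenticate-then-list sequence sketched in Python with the standard library's httplib (http.client in Python 3). The auth endpoint and the header names follow Rackspace's docs of the time, so treat them as assumptions and verify them against the current documentation.

import httplib   # http.client in Python 3

# authenticate: the response headers carry the token and the storage URL
auth = httplib.HTTPSConnection("api.mosso.com")   # assumed auth endpoint
auth.request("GET", "/auth", headers={"X-Auth-User": "myuser",
                                      "X-Auth-Key": "myapikey"})
resp = auth.getresponse()
token = resp.getheader("x-auth-token")
url = resp.getheader("x-storage-url")             # e.g. https://host/v1/AccountString

# list the containers in the account (append ?format=json for JSON output)
host, path = url.split("://", 1)[1].split("/", 1)
store = httplib.HTTPSConnection(host)
store.request("GET", "/" + path, headers={"X-Auth-Token": token})
print(store.getresponse().read())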

Friday, May 15, 2009

auto-login using rsh

I know that rsh is deprecated, but it is used all over the place in the test harness for the product I am working on. I needed to enable auto-login for rsh and rcp, but Google didn't give definitive answers. So here is how you do it:

- Install rsh-server RPM on the machines which will be communicating
- In /etc/xinetd.d/{rsh,rexec,rlogin} change "disable = yes" to "disable = no"
- Add rsh, rexec and rlogin to /etc/securetty
- Restart xinetd service:
service xinetd restart
- Add "hostname username" combinations in ~/.rhosts file. For example if you want to allow root@garfield access then add "garfield root".

Fun with assembly level debugging

Investigations into a failed regression test revealed that some allocations were crossing page boundaries, and when these allocations were passed as buffers to bios, the read operation failed. The reason for the failure is that DASD requires bv_len to be on blocksize boundaries, whereas the allocations in question crossed page boundaries without blocksize alignment. The failure was suspected to be in the dasd_eckd module, in dasd_eckd_build_cp():

rq_for_each_bio(bio, req) {
	bio_for_each_segment(bv, bio, i) {
		if (bv->bv_len & (blksize - 1))
			/* Eckd can only do full blocks. */
			return ERR_PTR(-EINVAL);

We needed to make sure that this was the actual point of failure, but since we could not modify the kernel on the IBM zSeries machines, we had to use console debugging to confirm the suspicion.

First, install the kernel-debuginfo package, which you will need for debugging.

* Get disassembly code of the module using objdump

objdump -d /lib/modules/2.6.9-67.EL/kernel/drivers/s390/block/dasd_eckd_mod.ko > dasd_eckd_dis

0000000000001ef0 <dasd_eckd_build_cp>:
1ef0: eb 6f f0 48 00 24 stmg %r6,%r15,72(%r15)
1ef6: c0 d0 00 00 00 00 larl %r13,1ef6
1efc: b9 04 00 ef lgr %r14,%r15
1f00: a7 fb fe 40 aghi %r15,-448
1f04: c0 c0 00 00 00 00 larl %r12,1f04
1f0a: e3 e0 f0 98 00 24 stg %r14,152(%r15)
1f10: e3 30 f0 e0 00 24 stg %r3,224(%r15)
.....
.....
2134: a7 0a ff ff ahi %r0,-1
2138: a7 aa 00 09 ahi %r10,9
213c: 18 51 lr %r5,%r1
213e: e3 e0 c0 00 00 04 lg %r14,0(%r12)
2144: e3 90 f1 18 00 04 lg %r9,280(%r15)
214a: 58 20 90 08 l %r2,8(%r9)
214e: 18 12 lr %r1,%r2
2150: 18 32 lr %r3,%r2
2152: 14 10 nr %r1,%r0
2154: 88 30 a0 00 srl %r3,0(%r10)
* 2158: a7 74 04 53 jne 29fe
215c: e3 10 90 00 00 04 lg %r1,0(%r9)
.....
.....
29fe: a7 29 ff ea lghi %r2,-22
2a02: a7 f4 06 1b j 3638

The line marked with '*' was the suspected bv_len alignment check. The jump from this line to offset 0x29fe confirmed it: EINVAL = 22, and after loading the return value (-22 = -EINVAL) the function returns.

Now, while setting breakpoints, remember not to use the addresses or offsets from the objdump directly (if you do, remember to subtract the start address of the function from the offset of the instruction). Or, better yet, use the disassembly output from GDB:

$ gdb /lib/modules/2.6.9-67.EL/kernel/drivers/s390/block/dasd_eckd_mod.ko

(gdb) disassemble dasd_eckd_build_cp
Dump of assembler code for function dasd_eckd_build_cp:
0x0000000000001ef0 <dasd_eckd_build_cp+0>: stmg %r6,%r15,72(%r15)
0x0000000000001ef6 <dasd_eckd_build_cp+6>: larl %r13,0x1ef6
0x0000000000001efc <dasd_eckd_build_cp+12>: lgr %r14,%r15
0x0000000000001f00 <dasd_eckd_build_cp+16>: aghi %r15,-448
0x0000000000001f04 <dasd_eckd_build_cp+20>: larl %r12,0x1f04
0x0000000000001f0a <dasd_eckd_build_cp+26>: stg %r14,152(%r15)
0x0000000000001f10 <dasd_eckd_build_cp+32>: stg %r3,224(%r15)
0x0000000000001f16 <dasd_eckd_build_cp+38>: stg %r2,216(%r15)
0x0000000000001f1c <dasd_eckd_build_cp+44>: mvi 359(%r15),134
0x0000000000001f20 <dasd_eckd_build_cp+48>: lg %r1,16(%r3)
0x0000000000001f26 <dasd_eckd_build_cp+54>: tml %r1,1
0x0000000000001f2a <dasd_eckd_build_cp+58>: mvc 232(8,%r15),88(%r2)
0x0000000000001f30 <dasd_eckd_build_cp+64>: je 0x1f40 <dasd_eckd_build_cp+80>
0x0000000000001f34 <dasd_eckd_build_cp+68>: tml %r1,1
0x0000000000001f38 <dasd_eckd_build_cp+72>: mvi 359(%r15),133
0x0000000000001f3c <dasd_eckd_build_cp+76>: je 0x29fe <dasd_eckd_build_cp+2830>
0x0000000000001f40 <dasd_eckd_build_cp+80>: lg %r2,232(%r15)
.....
.....

Remember that the offsets here are in decimal, not in hex.

0x00000000000029fe <dasd_eckd_build_cp+2830>: lghi %r2,-22

OK, so as you can see, we jump to 0x29fe, which is the instruction above, at decimal offset 2830. So find out the address of dasd_eckd_build_cp from kallsyms:

grep dasd_eckd_build_cp /proc/kallsyms

Here it is 0x000000006088f27c, so we place a trace point at 0x000000006088f27c + 2830 = 0x6088fd8a (GDB can do the arithmetic for you: "p/x 0x000000006088f27c + 2830").

Go to the CP console; all the rest is done there:

#CP TR I PSWA 6088fd8a

- this sets an instruction trace point at the given address.

The reason we do not set the trace point on the jump instruction itself is that it triggers many false positives, which makes debugging impossible.

You can query all tracepoints using #CP Q TR.

When you hit a trace point, you can continue with #CP B.

If you want to single step: #CP TR I
followed by #CP B
Each "#CP B" will take you forward one instruction.

When you want to end tracing and remove all traces, do #CP TR END.

If you want to break within a function address range:
#CP TR I R <start>.<length>

e.g. #CP TR I R 6088f27c.200

To display registers you can use:
#CP D G - general registers
#CP D X - control registers
#CP D AR - access registers
#CP D PSW - display PSW

After insmod'ing a reproducer module with the trace on, we hit the instruction that returns EINVAL, thereby confirming the problem.

Friday, April 24, 2009

Overwhelmed

Last week, on 17th April, was our first marriage anniversary. The last year has easily been the best of my life, filled with love, affection, laughter and lots of success for both of us. We definitely wanted to celebrate it in a big way, and Goa was an ideal place for a romantic getaway.

We stayed at the Lemon Tree Amarante hotel and the stay was comfortable and leisurely. With the beach just about 500 meters away and an awesome pool (with a pool bar), this is just about the ideal Goa hotel. It was great fun to watch DVDs on the LCD TV provided in each room.


The Goa beaches were full of life, with lots of shacks providing awesome cocktails and really tasty food. There was a cool breeze soothing us all through the day. Chilling out in each other's company on the beach for hours together was really memorable.


The icing on the cake was the set of cute and brilliant gifts given to me by Komal. She gave me all of four gifts, each one better than the last:


A very nice photo of ours engraved in a crystal. The platform on which it stands beautifies it by dispersing light in different colors.

XMini capsule speakers

A very compact and cute speaker for our iTouch, with surprisingly good sound quality for its size. It has a cool battery life of 8 hours and can also be charged through a USB port.

A magnificent diamond-studded gold pendant with our initials. The design is Komal's handiwork - really creative.

The tale of our life as a conglomeration of pictures. Komal blew me away with her artistic skills and creativity on this one. She must have spent long hours making this amazing piece of work. Though to be honest, its beauty is not reflected in these pics; it looks much more beautiful in real life.

A cute collage of my pics from when I was little and was actually sweet.

Collage of Komal's college life pics. Looking beautiful wifu!

A great pic with awesome expressions!

Dinner at the "Beach House" restaurant at the Taj. The ambience was exhilarating: overlooking the sea, with a beautiful breeze and awesome food. Gotta go and stay at the Taj sometime; it's a different experience that cannot be put into words.

Monday, April 20, 2009

Oracle buys Sun

That Sun was putting itself on the market and that someone would step up to pay a decent price was written in stone. And finally it has happened, with Oracle buying Sun.

Personally, I think this is a good deal for Sun. Before the talks with IBM, Sun's stock was trading around $3-4, and now it is being sold at $9.50 per share. That is a very good deal for the investors and Sun employees. IBM and Sun had many overlapping products, and that merger would certainly have resulted in job losses. Oracle is flush with money and will surely continue to invest in Sun's products, since there is little overlap.

Some of my interesting thoughts about this deal:
  • Oracle must be loving the thought of owning Java, since it's the crucial glue in their software offerings.
  • Contrary to popular perception, Oracle won't kill MySQL; instead they will continue investing in it so that they get an entry into the low-end database market. Customers can then be upgraded to Oracle as they grow - a win-win situation for Oracle. Note that since MySQL is open source, if Oracle won't develop it, someone else surely will.
  • Oracle now has access to the entire stack - the Oracle database, Java, ZFS, Sun hardware, Oracle-backed Linux. That is a pretty strong offering.
  • Every M&A deal brings with it some layoffs, resignations and cultural changes, but it is this very churn that Sun has needed for a long time. The majority of the people will come out smiling eventually.

Saturday, April 11, 2009

Don't vote for independents and regional parties

I have a fervent plea to all voters: please *DO NOT* vote for independents or regional parties for the next 10 years or so. Even if these candidates are honest, hard-working and result-oriented, please keep them out of power.

I know this logic seems inverted, downright stupid even, to the uninitiated. But think about it: the only way we can have a strong government is if one of the national parties receives a majority. Only then can the government pursue a development agenda instead of catering to the whims of individuals and factions. I personally don't care whether it is the BJP or the Congress that gets the majority; come to think of it, both have tried to do a good job since 1998. The BJP did this nation a great service by going ahead with the nuclear tests and opening up the economy; in fact, it laid the groundwork for the boom we saw post-2004. The Congress also tried to make positive reforms in various sectors (insurance, telecom, the nuclear deal), but the fear of the Left kept it from doing a stellar job.


Right now the national parties spend half their time hobnobbing with their so-called allies, and only the remainder is given to the people. And I don't blame them. We as a people have failed if we vote on the following bases:
  • caste - needless to say, the majority of votes are given and taken on the basis of caste, and the percentage of people voting on this basis must start shrinking
  • regional pride - voting for the national government on the basis of regionalism is downright stupid. A good independent legislator is *useless*: he cannot push through reforms alone. Democratic governments run on collective ideology, not personal whims. So while it may *seem* like a good idea to vote for an independent candidate with a clean profile, it does not help you in the end.
  • short-term promises - the only promises that should be respected are those with the potential to empower: education, skill training, long-term infrastructure projects through government spending, making the bureaucracy accountable, transparent governance. Promises like reservations, subsidies and freebies only weaken the populace, as I have indicated in my previous post - though I must admit they will look attractive to the minimum-wage earner.

So my request to you is very simple: look at which of the BJP or Congress candidates in your region is better, and vote for him.

OK, now for the skeptics: you will quite rightly say that this cannot go on forever. You are right; we should do this only for this election and the next. Hopefully by then the national parties will have given a good account of themselves, parties like the Shiv Sena, MNS, Left, AIADMK, Trinamool, BSP, BJD, etc. will have been relegated to the sidelines, and we will have successive strong governments at the centre taking India where it should be.

Tuesday, April 7, 2009

NFS tricks

Today while trying to use NFS between different OSes I hit a few problems that I had not seen before. I am listing them down here:

- Since the NFS server ran SunOS and the client ran Linux, they were speaking different NFS protocol versions. In this case the "-o vers=3" mount option is needed so that NFSv3 is used as the protocol (e.g. "mount -t nfs -o vers=3 server:/export /mnt", with the hostname and paths as placeholders).
- The UIDs and GIDs on the server and the client were not the same, and hence permission was being denied. Either I would have to create the same UIDs/GIDs on both server and client, or allow rwx access to all users on the server. I had to choose the latter, since I do not have root access on the server.
- While trying to untar, I hit the error "cannot change ownership to uid X, gid X". This happened because tar was trying to preserve the UID/GID information in the archive, and those IDs were not present on the client. You can get around this with tar's --no-same-owner option.

Wednesday, April 1, 2009

Of the poor, by the poor, for the poor

I came across these comments made by Rahul Gandhi and they made me very angry. Let's consider everything our government already does for the poor:

- subsidized food grains and kerosene
- subsidized petrol and diesel, costing approximately 1.3 lakh crore rupees last year
- a subsidized work culture, since government employees need not fear being fired for failing to meet work standards
- farm subsidies, including no taxation of agricultural income, free electricity, government-regulated fertilizer prices and - guess what - free money in the form of loan waivers
- subsidized/free healthcare at government hospitals
- subsidies in the form of ailing, loss-making government companies

At first glance our government comes across as very noble, and with all this the condition of the poor should seemingly improve. Then why have these long-running policies not led to a substantial reduction in our poverty rate? In fact, when Nehru was absolutely pro-poor (to the extent of being anti-rich), our growth rate was at its lowest. What are the reasons for this antithesis? While many posts could be written on them, I will start with the most basic ones:

  • Any robust economy is driven by a minority of the population. This means that in a country of 1+ billion people you cannot expect 70% of the people to be rich or middle class. Whenever a country reaches the crest of its growth, consumption flattens and leads to deflation; many European countries, and Japan, are good examples of this. Currently only about 3% of the Indian population pays income tax. The government should try to widen the tax net to 10-20% over the next 5 years, and it is this 10-20% which will drive the economy, creating enough consumption demand to help alleviate the poverty of the rest.
Lesson: stop throwing good money after bad. Instead, start new projects that at least pay people _in lieu of_ some service.

  • Huge fiscal deficits: Insane government spending, mostly in the form of subsidies, causes large fiscal deficits, which means the country has spent more money than it has. Governments _raise_ such money by issuing bonds, which are bought by banks, other governments, etc. The larger your deficit, the more interest you need to pay, because heavy borrowing lowers your credit rating. This increases the government's cost of borrowing and slows down growth in the long term.
  • "False demand": I would stick my neck out and say that if we had lower subsidies, we would have been much less affected by the current economic crisis. The reason? Every economy goes through cyclical booms and busts, and the important thing is to throttle growth when it is driven by credit. While international fuel prices were going through the roof, demand in India was at its peak because prices here were unaffected. Then the government suddenly realized that the country's economy would crumble if it kept subsidizing oil, and prices were raised abruptly. That jolts the economy. If prices are decided by market fluctuations, demand gets throttled at the right time; growth may be slower that way, but it lasts longer, and we certainly would not need to see a 50% stock market drop in 6 months.
  • Giving freebies _for a lifetime_ sets a bad habit among the population. What is implicitly being said is: if you are ready to accept a poor life, we will feed you and keep you alive. We should instead empower people, providing them with skills or education; while they are learning, by all means shower them with freebies, but please, not forever.