The right kind of project management methodology can do wonders in software development. But when it comes to choosing one, people want answers to a lot of questions.

What is this Agile methodology we’ve been hearing about?

I’ve heard it’s quite similar to Lean methodology. Is it?

Does Agile actually mean Scrum?

Agile and Lean have been around for a very long time, but choosing between the two has always been a subject of debate. There is a relationship between them that is often misunderstood.

This is just a simple analysis of what each means, and how they are connected. But first, their history…


Lean was originally derived from ‘Lean Manufacturing’.

Lean Manufacturing, according to Wikipedia, is a method devised for the elimination of waste within a manufacturing system.

The Lean methodology, from a software project management perspective, is basically a set of principles that will help achieve speed, quality and customer satisfaction.

You may have heard something similar about Agile development methodology as well. Therein lies the source of confusion.


Back in the day, there were some heavyweight methodologies in software development. They were popular all right, but they kind of defeated their own purpose. How?

You see, these methodologies started stifling software projects, preventing them from delivering the results they were meant to. Software projects exist to create software that helps the customer. With those methodologies, things went sideways. The Agile Manifesto was formulated as a reaction, or better yet, a solution.

So Agile basically refers to the principles proposed in the Manifesto.

The confusion

Mary Poppendieck, who had worked in a manufacturing plant, teamed up with her husband Tom Poppendieck, a software developer, to figure out how to adapt manufacturing principles to the software realm. That intellectual concoction, ladies and gentlemen, is what we call Lean.

Interestingly, Mary was also a founding board member of the Agile Alliance, the organization formed to promote the ideas of the Agile Manifesto. Lean-influenced ideas clearly played a key role in the origin of the Manifesto.

Lean and Agile

The history lesson is over. Time to focus on the topic at hand.

So the ideas from Lean manufacturing have been applied to software development, and this largely defines the way Agile works. Hence the similarity. Both Lean and Agile are based on a combination of a customer-centric approach and adaptive planning. As a matter of fact, Lean manufacturing did influence Agile enthusiasts, or Agilists as they are commonly called.

Lean is about eliminating waste, right? So in the software realm, the idea is to eliminate anything that doesn’t add value and work only on what is absolutely necessary. That ‘anything’ could include documentation, meetings, certain tasks etc. Inefficient ways of working are also eliminated, thereby delivering results faster.

Here’s what one can derive from Mary Poppendieck’s words.

Lean = Rapid Response + Quality + Discipline + Speed-to-market

Agile, on its own, focuses on short iterations. Functional software is delivered at the end of each iteration after thorough testing, scrums, sprints and a lot of other practices.

The Poppendiecks’ approach was to blend Lean with Agile. This Lean-Agile combo includes the iterations of Agile development and the validation practices of Lean. The two are deeply entwined when applied to a software development environment; one isn’t exactly an alternative to the other. So, in a nutshell, if you are using Agile, you are using Lean as well, and vice versa.

The point of interest is where you decide to use the ideas from Lean manufacturing in your Agile methodology. So there isn’t actually a victor in this face-off.

To conclude, there isn’t a big difference between the two except at the core. Lean software development focuses on the elimination of waste so as to improve processes, while Agile software development methodologies (e.g. Scrum) adhere to the principles in the Agile Manifesto.

With that said, a good Agile development team will adopt the best technical and management practices (which will include the principles of Lean as well) that work best for them, and leave their customers satisfied in the end.

Written by: Prashant Thomas

“Bob: So, how do I query the database?
IT guy: It’s not a database. It’s a Key-Value store. . . .
You write a distributed map-reduce function in Erlang.
Bob: Did you just tell me to go **** myself?
IT guy: I believe I did, Bob.”
— Fault Tolerance cartoon, @jrecursive, 2009



Relational databases have been around for a while now, and their growth was accelerated by the emergence of web technologies. From the web’s breakout in 1995 to its peak around 2005, the relational database remained a stable and more or less central piece of the web revolution. But behind the scenes things were churning, especially with the arrival of Web 2.0 and the need for massive processing capability for big data. This was the age of Amazon, the largest retail operator of the time, with arguably the biggest web presence. While Web 1.0 was a collection of statically linked pages, Web 2.0 was all about dynamic content and the need to search and index those pages with transactional capabilities.

Amazon, in its early days, used the Common Gateway Interface (CGI) to facilitate user interaction. CGI allowed an HTTP request to invoke a script rather than display a static HTML page. Scripts written in Perl were used to access the database and generate pages on the fly. As technology progressed, CGI gave way to frameworks such as Java’s J2EE and ASP.NET, along with PHP (which followed the CGI model). Despite these advances, the basic pattern for data access, retrieval and rendering of dynamic pages remained unchanged.

At this juncture, scaling was not a big issue: a bottleneck in the client/server or web server layer could easily be fixed by piling on more web servers to meet rising traffic. Fixing a bottleneck at the database layer, however, was not so simple. As with the web server fix, in the early days these issues were addressed by upgrading to the latest and greatest hardware, operating systems and databases.

With the crash of the internet bubble, two realities came into play:

  • Indefinite expense in scaling up to the latest and greatest was no longer viable or economical.
  • Startups came into being, and they needed a more realistic solution, one that involved scaling up from a pint-sized infrastructure to meet the demands of a global market as the company grew.


The Open-source Solution

Following the crash, open-source software became increasingly valued within Web 2.0 operations. Linux supplanted proprietary UNIX as the operating system of choice, and the Apache web server became dominant. During this period, MySQL overtook Oracle as the Database Management System (DBMS) of choice for website development.

MySQL is less scalable than Oracle, generally runs on less powerful hardware and is not very good at taking advantage of multi-core processors, but the development community came up with some nifty tricks to enhance its value.

A technology called Memcached was developed to avoid database access as much as possible. Memcached is an open-source utility that provides a distributed object cache. It allowed object-oriented languages to cache objects whose information spanned multiple tables as in-memory objects that could be stored and accessed across multiple servers. By reading from these cache servers rather than the database, the load on the database could be reduced.
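The caching pattern described above (often called cache-aside) can be sketched as follows. This is a minimal illustration: an in-memory dict stands in for a real Memcached client, and `db_query_user` is a hypothetical stub standing in for an expensive multi-table database query.

```python
import time

# Stand-ins for a Memcached client and a database query.
cache = {}  # key -> (value, expiry_timestamp)

def db_query_user(user_id):
    # Pretend this is an expensive join across several tables.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id, ttl=60):
    """Cache-aside read: try the cache first, fall back to the database."""
    key = f"user:{user_id}"
    entry = cache.get(key)
    if entry is not None and entry[1] > time.time():
        return entry[0]                      # cache hit: no database access
    value = db_query_user(user_id)           # cache miss: query the database
    cache[key] = (value, time.time() + ttl)  # populate the cache for next time
    return value
```

In a real deployment the dict would be replaced by calls to a Memcached client library, so that cached objects are shared across many web servers rather than held per-process.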

Web developers also took advantage of MySQL replication. Replication allows changes to one database to be copied to another. Read requests could be directed to any of these replica databases, but write operations still had to go to the master database, because dependable master-to-master replication was not available. In a typical database application, however, and particularly in web applications, reads significantly outnumber writes, so the read-replication strategy made sense.
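The read/write split described above can be sketched as a tiny router. The connection names here are hypothetical stand-ins for real MySQL connections, and the SQL inspection is deliberately naive (it only looks at the first keyword):

```python
import random

# Stand-ins for actual database connections.
PRIMARY = "primary"
REPLICAS = ["replica-1", "replica-2", "replica-3"]

def route(sql):
    """Send reads to a random replica and writes to the master."""
    verb = sql.lstrip().split()[0].upper()
    if verb == "SELECT":
        return random.choice(REPLICAS)  # reads scale out across replicas
    return PRIMARY                      # INSERT/UPDATE/DELETE go to the master
```

Because replication is asynchronous, a real router also has to account for replication lag: a read issued immediately after a write may need to go to the master to see its own change.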

Memcached Servers and Replication

Figure 1 illustrates the transition from single web server and database server to multiple web servers, Memcached servers, and read-only database replicas.

Memcached and read replication dramatically increased the overall capacity of MySQL-based web applications. But while these methods improve a system’s read capacity, database write activity required a more dramatic solution.


Sharding allows a logical database to be partitioned across multiple physical servers.
In a sharded application, the largest tables are partitioned across multiple database servers, and each partition is referred to as a shard. The partitioning is based on a key value, such as a user ID. When operating on a particular record, the application must determine which shard contains the data and send the SQL to the appropriate server. Sharding is a solution used in large-scale applications like Twitter and Facebook.

Memcache with Sharding

In this example, there are three shards, and for simplicity’s sake, the shards are labeled by the first letter of the primary key. As a result, we might imagine that rows with the key GUY are in shard 2, while the key BOB would be allocated to shard 1. In practice, it is more likely that the primary key would be hashed to ensure an even distribution of keys across servers.

Sharding involves significant operational complexities and compromises, but it is a proven technique for achieving data processing on a massive scale. It is simple in concept but incredibly complex in practice. The application must contain logic that knows the location of any particular piece of data and routes requests to the correct shard. Sharding is usually associated with rapid growth, so this routing needs to be dynamic. Requests that can only be satisfied by accessing more than one shard need complex coding as well, whereas on a non-sharded database a single SQL statement might suffice.
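The hash-based routing described above can be sketched in a few lines. The shard names are hypothetical stand-ins for real database connections; hashing the key spreads rows far more evenly than first-letter partitioning, which skews toward common letters.

```python
import hashlib

NUM_SHARDS = 3
SHARDS = [f"shard-{i}" for i in range(NUM_SHARDS)]  # stand-ins for DB servers

def shard_for(key):
    """Map a primary key to a shard by hashing it.

    The hash makes the placement deterministic (the same key always
    routes to the same shard) while distributing keys evenly.
    """
    digest = hashlib.md5(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % NUM_SHARDS]
```

Note that with simple modulo hashing, changing `NUM_SHARDS` remaps most keys, which is exactly the rebalancing pain described below; consistent hashing is the usual mitigation.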

The Shards of Complexity (Limitations)

Sharding together with caching and replication is arguably the only way to scale a relational database to massive web use. However, the operational costs of sharding are huge. Among the drawbacks of a sharding strategy are:

  • Application complexity – It’s up to the application code to route SQL requests to the correct shard. In a statically sharded database, this would be hard enough; however, most massive websites are adding shards as they grow, which means that a dynamic routing layer must be implemented. This layer is often in addition to complex code being required to maintain Memcached object copies and to differentiate between the master database and read-only replicas.
  • Crippled SQL – In a sharded database, it is not possible to issue a SQL statement that operates across shards. This usually means that SQL statements are limited to row-level access. Joins across shards cannot be implemented, nor can aggregate GROUP BY operations. This means, in effect, that only programmers can query the database as a whole.
  • Loss of transactional integrity – ACID transactions against multiple shards are not possible, or at least not practical. It is possible in theory to implement transactions across databases in systems that support Two-Phase Commit (2PC), but in practice this creates problems for conflict resolution, can create bottlenecks, has issues in MySQL, and is rarely implemented.
  • Operational complexity – Load balancing across shards becomes extremely problematic. Adding new shards requires a complex rebalancing of data. Changing the database schema also requires a rolling operation across all the shards, resulting in transitory inconsistencies in the schema. In short, a sharded database entails a huge amount of operational effort and administrator skill.
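To illustrate the "crippled SQL" point above: a GROUP BY that spans shards must be issued against every shard separately and the partial results merged in the application, a pattern often called scatter-gather. A minimal sketch, with hard-coded lists standing in for per-shard query results:

```python
from collections import Counter

# Each inner list stands in for the rows one shard would return for
# "SELECT country FROM users" on its partition of the table.
shard_rows = [
    [{"country": "US"}, {"country": "IN"}],  # shard 1
    [{"country": "US"}],                     # shard 2
    [{"country": "IN"}, {"country": "IN"}],  # shard 3
]

def users_per_country():
    """App-side GROUP BY: scatter the query, gather and merge the counts."""
    total = Counter()
    for rows in shard_rows:                          # scatter: one query per shard
        total.update(r["country"] for r in rows)     # gather: merge partial counts
    return dict(total)
```

What a non-sharded database does in one `SELECT country, COUNT(*) ... GROUP BY country` statement becomes application code, which is exactly the complexity tax sharding imposes.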
Written by: Prashant Thomas

So what’s the easiest way to find a hosting provider for your website?

People normally look for easy ways to do things. The Internet made finding things easy; Google made it easier. So right now, the easiest way to find a web hosting solution is to google it. You can search for ‘the cheapest hosting provider’, ‘the best hosting provider’ or something like that. But the catch is that what you find might not always be the best deal for you.

When it comes to hosting your website which potentially carries the future of your business in it, you should be a bit more careful.

Finding a hosting provider is easy. Finding the right one is a challenge.

If you can nail the right hosting provider, your website will be able to take your business to new heights. Downtimes, slow performance etc. won’t be blocking your way.

Rule 101 to nail the right hosting provider – Do not settle for anything less than that which will truly get the most out of your website.

Here’s one way to find the right provider – ask questions. Their answers will tell you whether they are as good as they claim to be.

1. What’s your security policy?

Their answer should include the security measures they employ to protect your data. Each type of hosting package, be it shared, cloud or dedicated, will have security features unique to that package. They should be able to explain them specifically. You should also ask them how often they run anti-malware/anti-virus scans.

If you are running an eCommerce website or any website that handles sensitive customer data, an SSL certificate is recommended. Those hosting providers who offer SSL certificates should make their way up your list.

2. How reliable are your servers and what can you say about your uptime?

When you approach hosting providers for their services, your catchphrase should be “Downtime is a no-no”. After that you can ask them about their server reliability. Their answers should indicate that they have stable network connections with 99% or more uptime; anything below 99% is not recommended. A web host operating 24×7 with an uptime score of 99.5% is a great deal.

3. What’s your backup policy?

You may have heard of hosting companies offering regular backups. When you are about to hire one, you should ask them to define that ‘regular’ they are promising. Many companies back up the data once a day, some more than once. But there are hosts that don’t actually perform daily backups. This question will let you know which category the host in question belongs to.

4. What if there’s a power outage? Will you be accountable for it?

Obviously there will be Yes and No answers. Regardless of the answer, keep pushing by asking them to explain. As they explain, inspect when the company holds themselves responsible and when they don’t.

It’s also wise to ask them if they can be flexible enough to amend this clause if necessary. This could be useful when there is a power outage due to factors beyond the host’s control. Ask them if they would charge you in such a scenario.

5. Is the service scalable?

Depending on the type of your business, your web hosting requirements will vary. Some could do well with a shared hosting plan while others may require dedicated servers. Ask the host about the web hosting services available, and then ask them about their scalability.

There could be policies for scaling up or down. If the hosting provider doesn’t give a lot of hosting options, you may have to leave your current host and find a new one that can properly accommodate your website without affecting its performance. Depending on whether your business shrinks or grows, you will need to scale the hosting plan appropriately.

This is why it’s best to approach a host who provides a range of hosting options that you can upgrade or downgrade to depending on your business requirements.

6. What are your customer support policies?

This is a question that you can ask both the host and their present or former customers. Technical support and timely customer service are very important factors that define the quality of a host’s service.

They should be quick to respond should their clients need help resolving a technical issue. Reliable hosting companies offer 24×7 customer support including holidays. Make sure to ask them about their response and resolution time.

Other customers of the host can give you a review of the host’s quality. Also, don’t forget to ask them how you can reach out to them when necessary.

And now for the last question

7. What if I am not satisfied with your service?

Ask the host if they have a trial package for you to test their service. If they don’t, enquire about their customer satisfaction policies and whether they provide any kind of guarantee for their services.

If you are dissatisfied after you start using their services, and you want to migrate to a different hosting provider, you need to make sure the hosting provider wouldn’t complicate the migration. You should know if they offer a refund policy and how they can help you move your stuff out to a different server.


As I mentioned before, it really is challenging to find the right hosting provider, but the results are worth the time you spend researching. These questions can help you identify the host that’d be perfect for your business. That doesn’t mean these are the only things you can ask a host about the services they offer.

If you have more questions, feel free to ask them; they are obligated to answer your queries. Don’t make a call trusting their words alone. Check their customers’ feedback as well (Plan B). Good luck.

Written by: Shibu Kumar

The world of online business can be ruthless at times though it grants huge payouts for businesses that do it right. ‘Survival of the fittest’ applies here as well. But then again, there are startups that just grow at an incredible rate. So obviously there should be a formula for that kind of success.

The basic survival kit for businesses in the digital world

For a business to survive in the wild, harsh environments of the digital realm, it requires

  • An appealing website
  • A dependable web hosting provider
  • Excellent marketing and SEO strategies
  • Patience

Now, the first one in the kit is a no-brainer. A website is basically the business’ identity in the digital world. It generates revenue, acquires more customers, retains them, and is capable of more. But there is a catch. For the website to be effective, it needs a home – a place where it will be secure and nourished.

This is where the second item in the kit comes into play.

(The third and fourth item in the kit are another story for another time)

A Dependable Hosting Company

Dependable can mean anything good, like reliable, safe, supportive, secure, robust…

A dependable company makes sure you have the right web hosting service for your business so that your website is at its best at all times. This means great server response times, excellent uptime, adequate bandwidth and storage, 24/7 customer support etc. But these are all obvious facts anyone familiar with web hosting would know.

So what does it mean for your business if the hosting company is not that dependable? For starters, surviving the online business world becomes difficult.

Your website becomes vulnerable to a number of threats.

Threat to Data Integrity

Business emails, financial transactions, customer feedback, customer credentials… This is all data that feeds a business. In your case, you could use this data for a lot of things, from improving operations to optimizing business strategies. You don’t want it to fall into the wrong hands or lose it to malicious programs like viruses or malware.

A DDoS (distributed denial of service) attack that overwhelms even the firewall means chaos; in most cases even the administrator won’t be able to access the data. A hacked server that keeps sending spam emails automatically is another example. Email service providers might blacklist the server from which the spam originates, which means authorized account holders won’t be able to send mail.

If your hosting company isn’t competent, this could ruin them and your business in no time.

Physical Threats

The host will have the servers secured at remote data centers in most cases, promising that your data will be safe. But there are a lot of hosting providers out there who still don’t have the right security measures to prevent unauthorized access to their data centers.

A dependable hosting company would have 24/7 physical security, biometric access and deployable emergency protocols to protect this data. A data center without adequate security means your data is accessible to those who have access to the data center.

Server-related Threats

This is something you should give a lot of importance to. The server is supposed to give 100% uptime for websites. But that’s technically not possible. A reliable host server will offer an uptime of 99% or higher. Anything lower than that is looked at with doubt in the industry.

Apart from this, the server may experience a lot of technical issues. You would want your host to resolve the issue immediately before it starts to affect your website’s performance. An unreliable hosting company means you can’t expect them to fix it all in time.

A dependable hosting provider on the other hand will have a team of professionals ready at all times to answer your calls and troubleshoot the issues.


A dependable web hosting provider works akin to a bank. Banks secure your valuables in their lockers. There will be many lockers. But a user can unlock only those lockers that he’s authorized to unlock. Security protocols ensure that. The bank is responsible for providing that level of security to safeguard those valuables. A dependable host does the same.

Even in a dedicated server hosting solution where the server houses only your website, and there is no other user in the server who can access your website’s data, there should be security mechanisms to secure those data should someone try to access it externally. A dependable host will have that.

So basically, the difference between a dependable host and an undependable one is that the former ‘will’ have everything required to keep your website and its data safe and secure, while the latter ‘should’ have those things to fulfill the same purpose. ‘Will’ and ‘Should’.

So back to our topic…

What is the future of your business without a dependable hosting company?



Start with a dependable hosting company by your side, and you will only have to focus on your business, and not on keeping your website intact.

Written by: Shibu Kumar

One of those many problems businesses face online! A nightmare that gives webmasters a run for their money! A vexing interruption for web hosting providers while they try to be focused on providing technical support to their clients.

Your website can get blacklisted for a lot of reasons. Google does it too. Too often. An estimated 10,000+ websites get blacklisted daily by the search engine, mostly because of malware infections, exposure of sensitive information and the like.

So what happens once a website gets blacklisted?

The website owner will panic. The web hosting provider will have their hands full cleaning up the mess and getting the site back to running condition. This can be a bigger problem for businesses that don’t have advanced security specialists or monitoring tools for their website. In many cases, they won’t even realize that their websites have been blacklisted until it’s too late.

Analyzing the problem

Let’s consider the case of those businesses that can’t afford to implement security measures good enough to alert them when their sites get blacklisted. Most of the time, the owners come to know of the mishap after getting alerts from their browser or search engine when they try to access their sites. And chaos ensues.

Once a website gets blacklisted, time is of the essence. For businesses, every minute lost with a blacklisted website could mean not only lost revenue but also damage to their reputation. SMBs and startups suffer more as they won’t normally have wallets heavy enough to get through the ordeal.

A blacklisted website might lose its organic traffic from marketing which in turn negatively impacts the sales. As for the web host, the situation stands to undermine their credibility.

Fixing the problem

If the blacklisting is because of a malware infection, it could take more than a few hours, maybe even days, to remove it completely and cleanse the website. That depends on the severity of the infection, and on whether the website has an effective backup mechanism.

Removing the infection and restoring a backup is just step 1. The second step involves convincing Google to ‘unblock’ the website.

A simple 2 step fix right? Easier said than done.

The unblocking process may take hours.

Where web hosting providers come in

They have their work cut out for them. A blacklisted website of one of their customers can impact the providers’ business as well. Generally, many website owners tend to think it’s the hosting platform’s fault that they can’t access their website, when the website is blacklisted. Regardless of the reason for getting blacklisted, it wouldn’t be easy for hosting providers to get the website off the blacklist.

Reliable web hosting providers may have a plan to follow in case of such emergencies, to remediate the problem and minimize the damage. There are a lot of effective tools providers can use to get the de-blacklisting process done as fast as possible, thereby garnering the trust of clients. Good hosts won’t be using a lot of resources while fixing the problem.

Blacklisting problem can be a mess but it won’t be giving anyone a hard time if the site owners use powerful backup tools. They can restore the website to the way it was, with adequate support from the web host.

To remediate the problem quickly and efficiently, and have the websites back in action, the hosting providers should be aware of the following:

  • Use good antivirus programs to quarantine malicious files on administrators’ systems.
  • Change passwords for everything that uses a password, including logins, FTP, CMS accounts, databases etc.
  • Make sure clients have installed the latest versions of the applications and software they use, including the OS.
  • Delete all modified files added to the server after the issue was first identified, then perform a complete system restore. Cloud-based backup/recovery features make this easier for hosting providers.
  • Request that Google review the website again and remove it from the blacklist. Google Webmaster Tools is required for this.

There are many threats to a website. Getting blacklisted is just one of them. This should make it clear why it’s important for every web host and site owner to have tools at the ready to get their websites and their data back online, should a mishap occur.

Written by: Shibu Kumar