Monday 17 December 2012

Compliance is core business

Compliance is a mandatory pain

Most often, compliance projects are perceived as a mandatory pain. Only the strict minimum effort is spent to comply with what the regulator requires. Even though those projects are categorized as mandatory, they do not attract the same kind of attention as revenue-generating projects. What matters is that they are completed at the lowest possible cost.

This behavior impacts the way those projects are managed: instead of looking wider and trying to generate value by mobilizing the best business contributors, those projects are treated with a narrow view, which of course (no magic!) does not bring any value. Money is therefore spent just for the sake of being compliant.

There is always something positive in a change

It is said that there is always something positive to be found in any change, even in one enforced by regulators. Therefore, one could also find positive aspects in those mandatory projects, which could bring value and not only costs.

Let's take an example to illustrate this: the Business Continuity Plan (BCP). This is a mandatory plan that banks have to elaborate and maintain in order to ensure the continuity of their businesses in case of any type of problem (strike, power outage, flood, IT outage, metropolitan-scale incident).

BCP is rarely a core process

Even if the past has unfortunately shown that such worst-case scenarios are not science fiction (9/11), putting such a plan in place is still considered a mandatory pain by many actors.

This attitude is reflected in the way those projects are managed: not considered core, they are often delegated to a team outside the business, and communication and mobilization around the theme are pretty poor. The plan is usually quite basic, with little added value, and testing, which has to be done on a regular basis, is executed poorly, with no one fully understanding what is meant to be done.

Stepping back for a while, one could think of positive outcomes coming from a BCP process: a sound BCP means that the whole company is sustainable and able to face major problems, and that customers can safely leave their money in their accounts without the risk of losing it or no longer being able to access it. An outstanding BCP can be more than a cost; it can also bring a competitive advantage when communicated on internally and externally.

Sustainability should become a key value

Making sustainability a key value for the company means that internal and external communication will include that theme, and that BCP projects will become visible not only because they are forced by regulators but also because they become core business. This will generate stronger involvement from all parties, more ideas, a smarter plan and a better use of resources.

Instead of being seen as separate, those processes, once they become core, will also transform the way the business operates. An example of thinking more globally: instead of having a separate local provider substitute for the original one in case of emergency, one could share existing resources located abroad, reshaping the regular process so that daily operations are spread across the two locations. At the end of the day, sustainability and processes are improved, potentially without generating additional costs.

Of course, this is a basic example, just to illustrate the benefit of considering sustainability, and compliance more generally, not as something set aside but as part of core business.


The banking industry had better anticipate this move, as sustainability is increasingly becoming a key topic for regulators. The ones making the move first will be the only ones to benefit from it.

Sunday 9 December 2012

Capital markets IT


Incredible growth

In the last 20 years, very strong growth on capital markets has triggered huge investments. For some companies the strategy was pretty straightforward: any product, anywhere! The focus was definitely on growth and not on costs. The priority was to grab new market share and revenues, not to save costs or improve efficiency.

The direct consequence of this strategy has been a huge increase in size, as shown in the following graph:

[Graph: Goldman Sachs employees (source: Goldman Sachs)]

Management capacity had to grow accordingly

This growth has not been possible without increasing management capacity. Continuing with the Goldman Sachs example, if we consider that a team is composed of 6 teammates on average, the growth in the graph above corresponds to an increase of more than half a hierarchical level!
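A quick back-of-the-envelope check of that claim (the headcount figures below are purely illustrative, since the actual values come from the graph): with an average span of 6, a hierarchy of depth d can manage roughly 6^d people, so the depth needed grows as the log base 6 of the headcount.

```python
import math

TEAM_SIZE = 6  # average number of teammates per manager, as assumed above

def hierarchy_depth(headcount: int) -> float:
    """Approximate number of hierarchical levels needed for `headcount`
    people when every manager handles TEAM_SIZE direct reports."""
    return math.log(headcount, TEAM_SIZE)

# Illustrative figures only -- read the real values off the graph above.
before, after = 10_000, 35_000
print(f"{hierarchy_depth(before):.2f} -> {hierarchy_depth(after):.2f} levels")
# 5.14 -> 5.84: an increase of about 0.7, i.e. more than half a level
```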

As experienced managers were not so common within capital markets IT, many of the managers able to structure teams and sustain this growth came from other industries, or even from areas other than IT.

Since the crisis, the innovation and revenue race is over

The financial crisis has brought a new situation: the growth race is over and cost is the main focus. This is forcing the financial industry to move back, cutting costs and jobs. This is where the main challenge lies: how to scale down while regaining efficiency? Will the managers who handled the scale-up be able to manage the scale-down?

This deep transformation is very difficult to trigger and has to come from top management which, seeing smaller or new competitors entering the market, may realize that it is mandatory. It means cutting hierarchical levels and building smaller teams that are more agile and more efficient.

Processes also need to be revised and adapted to the new size; internal invoicing, for example, does not require the same effort in a smaller setup!

The most challenging part is not to cut costs while keeping the same efficiency, but to improve efficiency while cutting costs. For IT, this means identifying and retaining the best profiles, those who make a true difference, and building small, agile teams around them. One can understand how different this approach is from the past, where quantity was more in focus than quality.

Wednesday 5 December 2012

IT and unmanaged processes


Quite commonly, organizations have some undefined or unmanaged processes. This gives the C-level the impression that only IT-enforced processes work properly.
To feel more comfortable, top or senior management then triggers projects to automate those unclear processes.
Very often this IT enforcement is not necessary: a proper process definition and the proper checks would often be sufficient, and by far cheaper!

IT is not the only way to set up a process

A little anecdote on this topic: a company was struggling to set up a sensitive process dealing with revenue recognition between production and sales. As it was not working, it was decided to automate it (!). IT was called in and implemented a process at the individual deal level, creating an IT and administrative monster. Production and sales fought over each transaction, blocking the whole process.
Stepping back for a while, it was clear that this was the wrong decision, and a more manual approach was taken: a weekly report was produced and both parties were invited to discuss and sign it off. The sign-off was recorded in a basic document workflow, making the agreement official.
This simple approach paid off: thanks to the netting effect, production and sales were much more in sync, talking to each other every week instead of fighting through systems, which also created a smoother relationship.
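A minimal sketch of what such a weekly netting report could look like (the record structure, party names and fields are all invented for the illustration; the original workflow is not described in more detail):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Deal:
    deal_id: str
    amount: float  # positive: owed to production; negative: owed to sales

@dataclass
class WeeklyReport:
    week_ending: date
    deals: list[Deal]
    signed_off_by: list[str] = field(default_factory=list)

    @property
    def net_amount(self) -> float:
        # Netting: only the weekly aggregate matters, not each unitary deal.
        return sum(d.amount for d in self.deals)

    def sign_off(self, party: str) -> None:
        self.signed_off_by.append(party)

    @property
    def official(self) -> bool:
        # The agreement is official once both parties have signed.
        return {"production", "sales"} <= set(self.signed_off_by)

report = WeeklyReport(date(2012, 11, 30), [Deal("D1", 120.0), Deal("D2", -80.0)])
report.sign_off("production")
report.sign_off("sales")
print(report.net_amount, report.official)  # 40.0 True
```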

IT is not meant to be a change agent

Another impact of trying to automate unclear processes is that IT is put in a difficult role: IT becomes the change agent. Instead of the required process change being managed from the business side, it suddenly becomes a change forced by IT. Business managers can escape their responsibility for managing the change by letting IT do it. This is not comfortable for IT teams, which then have to manage the process clarification, its implementation and the change management.
The small example above also tells us that brute force is not helpful. Systematically enforcing process implementation through systems does not always bring value, although it is sometimes mandatory (managing transaction status, product lifecycle, purchase orders, …).

A lightweight approach can also be considered, especially when processes are unclear. IT is then no longer in the driving seat, which gives the responsibility for change back to the business.

Sunday 2 December 2012

Incident management

It is quite obvious that enterprise IT systems are complex and that reliability is a day-to-day fight. To overcome those difficulties, best practices such as ITIL have been deployed, introducing state-of-the-art processes.

Best practices are not only recipes

Looking more specifically at incident management, which is key for complex systems, a lot of those best practices have been introduced, which is a very important move. Nevertheless, putting such processes in place is sometimes done as if following a recipe, without a deep understanding of why the process has been defined that way. Best practices are certainly useful as they provide a shortcut, but they should not prevent us from thinking about the essence of what we are doing.

IT operations should not only defend itself

By definition, for IT operations, the best possible result is to be invisible: if everything works perfectly, users face no outages and do not realize the huge effort produced on the IT operations side to achieve this result. Each outage therefore only brings negative impact on IT operations. In this context, incident management and KPIs are seen as defensive tools to prove that the system works better than one could imagine. Tools and recipes coming from ITIL are applied, giving some kind of label to this defensive strategy.

Understanding the root cause is key

Instead of sweeping dust under the carpet and adopting a defensive posture, there is another possible way: implement those processes and tools in a proactive mode, as a means to better understand what is going on. What is important is not the KPI by itself but the comment which goes with it. What is important is not the incident, but to ensure that the incident does not occur again because its root cause has been neglected.
Real improvements come when a deep analysis is made of the root cause of an incident. This is not easy, sometimes even impossible. It takes time and potentially slows down other processes, but it pays off. Of course, IT operations teams, in order to look good, will try to restart the system as soon as possible, paying less attention to what caused the issue. This natural inclination should be discouraged by management, which should force an analysis to be performed and an action plan to be defined and executed.
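As an illustration, here is a minimal sketch of an incident record that makes the root cause and the action plan first-class citizens (the fields are hypothetical and not taken from any specific ITIL tool):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Incident:
    incident_id: str
    opened_at: datetime
    description: str
    restored_at: datetime | None = None   # service back up: the easy part
    root_cause: str | None = None         # the part that prevents recurrence
    actions: list[str] = field(default_factory=list)  # the action plan

    def may_recur(self) -> bool:
        # Restarting the system is not enough: without an identified root
        # cause and an executed action plan, the incident may happen again.
        return self.root_cause is None or not self.actions
```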

Incident and KPI reports are communication means

This kind of true transparency from IT operations demonstrates to users that the IT team takes care and responsibility. Setting up commented incident reports and monthly KPIs also gives visibility to IT operations, which is then no longer the invisible department one can easily outsource. Producing commented KPIs mixing technical and business matters demonstrates that IT operations clearly services the business, bringing its contribution to business development.

Tuesday 27 November 2012

Service integration is key for agility

Three years ago, I posted a similar message on this blog. Looking at the different conversations around, I think it is still quite valid. To give it a larger audience, I am reposting it in English.

Service integration has been around for a while

Without looking back to the origins of IT, integration technology has been available for many years: Remote Procedure Call (RPC), for example, offered some help to integrate systems together.
This proposition has been improved over time with DCE, CORBA, J2EE, web services and now cloud APIs.

Service integration brings a completely different proposition from data integration, which has been used for ages and remains very popular. It consists in integrating ready-to-use services instead of integrating raw data. Data complexity is then hidden behind a service. This layered architecture brings a lot of benefits, of which reuse is key.

Reuse is the key word

Reuse in IT is a quest for the Grail. Over the years many attempts have been made, most of them unsuccessful. What service integration brings is that systems can be much more independent, the cloud taking this a step further: the outsourced service is completely independent from the application using it.
This facilitates integration because the two systems need no in-depth knowledge of each other's data representation; they share a documented, well-known interface.
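To make this concrete, a minimal sketch of what "sharing a documented interface" means in practice (the service and field names are invented): the consumer depends only on the contract, never on how the provider stores its data.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class AccountPosition:          # the published data contract
    account_id: str
    balance: float
    currency: str

class PositionService(ABC):
    """The documented, well-known interface consumers program against."""
    @abstractmethod
    def get_position(self, account_id: str) -> AccountPosition: ...

class MainframePositionService(PositionService):
    def get_position(self, account_id: str) -> AccountPosition:
        # The raw data complexity (legacy record layouts, joins, codes)
        # stays hidden behind the service; stubbed here for the sketch.
        return AccountPosition(account_id, 1250.0, "EUR")
```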

Reuse is key because companies have invested billions in IT systems over the years and cannot simply throw those legacy systems away because a new technology is popping up! For the sake of agility, integration is key: there is no need to rebuild from scratch what has been built before. Instead, reuse accelerates time to market, with development focused on the new functionality or technology.

Service integration saves time

Integrating a legacy system is a good example of how service integration provides value and agility: legacy systems deliver proven services and maintain a lot of key data. How can one benefit from these assets through new channels like mobile, tablets or simply a web browser? Should the legacy system be ported to a newer technology? Porting is both expensive and slow... Building services out of the legacy (keeping the proper granularity in mind) and integrating those services with the new technology or channel is by far quicker and cheaper.

As a concrete example, in the beginning of the 90s a bank owned a financial engine made of more than one million lines of Fortran 66, associated with a large proprietary database. The company was at that time introducing open systems, which gave users a much better experience but lacked the powerful computations provided by the legacy system. The solution, gradually introduced, was to define from the legacy a set of high-level abstract interfaces on top of which new applications were built on the open-system platform. This delivered the best of both worlds: user friendliness and advanced computation capacities.
More recently, the same applies to companies willing to introduce mobile services. All the data and services are already there in legacy systems but unavailable to new channels. In the insurance industry, car damage insurance benefits a lot from mobility: pictures can be taken on site by the user, the garage or the expert, and added to the file maintained on the legacy system. Building web services, or using any kind of service integration technology, on top of the legacy unlocks existing data and services. In that example a mobile app was built using services from the mainframe such as opening and validating a case, adding new information to a case, ...
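As a sketch of that kind of facade (everything here is hypothetical: the endpoint names, the case fields, and the `LegacyCaseSystem` adapter standing in for the real mainframe gateway), a thin web service can expose the mainframe case operations to the mobile app, here using Flask as an example:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

class LegacyCaseSystem:
    """Stand-in for the real mainframe adapter (CICS gateway, MQ, ...)."""
    def open_case(self, policy_id: str) -> str:
        return "C-0001"  # stubbed: the real call goes to the mainframe
    def add_document(self, case_id: str, doc: bytes) -> None:
        pass             # stubbed: attach a damage picture to the case file

legacy = LegacyCaseSystem()

@app.post("/cases")
def open_case():
    # Opening a case reuses the proven legacy service as-is.
    case_id = legacy.open_case(request.json["policy_id"])
    return jsonify({"case_id": case_id}), 201

@app.post("/cases/<case_id>/documents")
def add_document(case_id: str):
    # A picture taken on site by the user, the garage or the expert.
    legacy.add_document(case_id, request.get_data())
    return "", 204
```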

Service integration is not a technical matter

This shows clearly that service integration is not (or not only) a technical matter. Service integration means that services are defined functionally. In those examples, even if the starting point may have been a technical integration, very soon in the project the need for a proper definition of a consistent set of services arose, connecting with Enterprise Architecture.
Understanding up front what the 50 key services an IT system should offer are helps to succeed in service integration.
Service integration can leverage Enterprise Architecture, concretely implementing the services that architects have identified.

Of course one could argue that leveraging old technology creates a huge maintenance problem: how to maintain legacy COBOL systems? And what if these systems are also used by newer applications, making them impossible to decommission? In fact, this argument relates more to life-cycle management than to service integration. If a legacy system is still using old technology, it is not because of service integration!
On the contrary, service integration provides flexibility, isolating producers from consumers and allowing a decoupled life cycle along the value chain.

Friday 23 November 2012

MIS: profit or cost center?


Three years ago, I posted a similar message on this blog. Looking at the different conversations around, I think it is still quite valid. To give it a larger audience, I am reposting it in English.


The scope here is not industrial data processing, which is definitely part of core business, just as a plant is. Some companies therefore organize IT in two separate groups, managing industrial data processing on one side and business data processing on the other. The scope of this post is business data processing only.

Business data processing comes in two main flavours which look similar but differ quite significantly: cost-saving IT and revenue-generating IT.

Cost-saving IT improves existing processes

This IT clearly belongs to the support functions and helps bring process costs down. Examples are accounting, invoicing, inventory management…

Revenue-generating IT is core business

This IT does not only provide support; it also enables new products and businesses. Therefore it is core. Thanks to its information system, a company can develop innovative products to differentiate itself from competitors. The IT system becomes as important as a plant or any other means of production: it delivers products to customers. In past years, this was the case only for a few industries: capital markets, where products are completely digitalized and managed through IT, or mobile telephony, where accounting is key to invoicing innovative and competitive products.

The more digital we get, the more IT becomes a profit center

The world is changing… Since I wrote the original post three years ago, our world has become even more digital. E-commerce is mainstream, and there, obviously, IT is a key differentiator. Regular products are now sold with embedded digital services: the UPSs of this world are not only delivering parcels but also services, and their reputation can be seriously damaged if their IT is down. The same applies to car manufacturers: cars are sold with digital services like remote diagnostics and remote assistance, which are designed as part of the product.

Managing cost-saving IT and revenue-generating IT are totally different

Of course, in the light of those two examples, one understands that these two kinds of IT cannot be managed the same way. The first one is cost-driven, servicing its customers at minimal cost, potentially impacting quality. Minimizing cost often leads to outsourcing and offshoring in order to share costs. This is made possible because cost-saving IT optimizes existing processes, which are supposed to be well known and documented. A close relationship between IT and users may therefore not be perceived as a key element.
For revenue-generating IT, the approach is completely different: it is not only cost-driven. What matters is that the product embedding an IT service is delivered in due time, matching clients' expectations and bringing added value. This is core business. Of course, here as well, external providers can be used, but not in a full outsourcing mode, as companies need to control their core business. The IT part of those products is built in a tight relationship with the other parts of the product. Offshoring is therefore difficult to apply here, as it creates a growing distance within the product team.

Do not get mixed up!

As presented here, those two flavours of business IT look the same but are fundamentally different; approaches and means are dissimilar. Not recognizing those differences creates "hybrid" strategies which can be quite hampering.
When a CIO running revenue-generating IT forgets its specificities and tries to lower costs through outsourcing and offshoring, he is on the wrong path. Quite soon, but unfortunately not immediately, outsourcing creates the distance mentioned earlier. Existing teams no longer understand the strategy and get demotivated. IT is then no longer able to bring the value it used to bring to the core business.
Symmetrically, thinking that one can transform a cost-driven IT into a successful revenue-generating IT is a mistake. A cost-driven IT is very far from the core business, usually does not have a service-oriented culture and cannot easily cooperate and communicate with other departments to shape a new product. The amazing number of articles and books dealing with the issue of bridging the gap between IT and business is evidence that cost-driven IT cannot move to core business overnight!
This explains some of the issues we face in IT departments when the positioning is unclear, creating huge frustration on both the business and the IT side.
Since I wrote this, I have found a complementary view on the same topic.

Sunday 18 November 2012

Trust is IT fuel

Every outage makes IT look bad!

Running an IT system is difficult: whenever an outage pops up, the IT department looks bad in front of its customers, and very often the hunt for a guilty party begins. This natural inclination to finger-pointing has to be fought as much as possible: it improves neither the relationship with the customer nor the system itself. Needless to say, from a management point of view, it creates even more damage: people make mistakes by definition, and finger-pointing will encourage them to cover their position or even to hide facts.
Everyone has been confronted with that very annoying situation in which the whole system is down but every subcomponent is working fine! This is a typical situation where looking good as an individual is more important than solving the issue the company is facing, so everyone defends himself instead of collaborating to solve the issue.

Finger-pointing does not help solve an issue!

This finger-pointing behaviour generates fear and anxiety, which block any improvement process. Who would be keen on changing something which may generate a problem? Even worse, people very often hide failures, making it impossible to identify the root cause. I am sure this will ring a bell for many of us: how much time have we spent trying to understand implausible arguments meant to hide the evidence?
Another negative impact of this lack of trust is that IT tends to fix the outage immediately, overreacting to prove how good it is: “of course, we had an outage, but it was fixed immediately…!”
All these tendencies lead to hiding, or at least not analysing, the root causes of the outage; no improvement is then possible, and those outages will keep recurring.

Are we sure this will not happen again?

There is only one question which really matters: "Are we sure this will not happen again?". If this question is enforced by management and users, the focus moves dramatically to understanding the root causes and improving the system.
IT chains are complex, made of hundreds of components chained one after the other; an outage may therefore come from a root cause quite far from the visibly faulty component. A first-level analysis is definitely not sufficient. Encouraging a deep analysis is not only mandatory to improve overall reliability; it also breaks the vicious circle of suspicious relationships, leading to better efficiency.

Thursday 15 November 2012

Cloud, SOA, distributed computing, RPC: same potential pitfalls!

It has been a while since client/server technology was first introduced. Technologies have improved with RPC, distributed computing, SOA, ESB, cloud, ... but the same doubts are always around the corner: technology XY does not work, does not perform, ... even though Web 2.0 gives us daily proof that distributed computing and the cloud do work.

What is the root cause of these concerns? It mainly comes from a lack of understanding of what is happening behind the scenes. Even with primitive client/server technology, developers could ignore what was happening behind their SQL statements or their stored procedures. Network latency, network stacks and marshalling/unmarshalling are more and more hidden from the developer. Therefore, today as in the past, granularity is not much in focus. This leads to services that are too small or too big, and either inefficient or not responsive enough. Executing a simple addition across the network has never been and is still not a good idea; neither is transferring 1 GB of data to call a service.
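A minimal sketch of the granularity pitfall (the latency figures are assumptions, picked only to illustrate the orders of magnitude): calling a fine-grained remote `add` once per item pays the network round trip N times, while one coarse-grained call pays it once.

```python
ROUND_TRIP_MS = 5.0    # assumed network latency per remote call
COMPUTE_MS = 0.001     # the addition itself is negligible

def fine_grained_cost(n_items: int) -> float:
    # One remote call per addition: n round trips dominate.
    return n_items * (ROUND_TRIP_MS + COMPUTE_MS)

def coarse_grained_cost(n_items: int) -> float:
    # One remote call for the whole batch: a single round trip.
    return ROUND_TRIP_MS + n_items * COMPUTE_MS

print(fine_grained_cost(10_000))    # ~50010 ms
print(coarse_grained_cost(10_000))  # ~15 ms: same work, different granularity
```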

These problems also come from a common pitfall: the lack of IT urbanization. Without a clear vision of the services to implement, derived from a functional map of the essence of the information system, services tend to be numerous, redundant and, in a word, inadequate. If such a map is not defined, it is very difficult to have any control over service granularity. As an example, some organizations use service directories to try to organize and manage the profusion of services that were created with a mainly technical approach in mind.

To ensure success, a list of services should be defined beforehand, matching design principles and non-functional requirements (like granularity), so that the use of the services performs as expected. This approach also prevents redundant services, which create maintenance nightmares... just as with more old-fashioned technologies!


Sunday 11 November 2012

Stop building systems supposed to work!


IT systems are complex, made of many collaborating chains. Reliability is of course a key word, but resilience should also be considered a key aspect of improving reliability. If potential failures are part of the initial design, resilience, and therefore reliability, will be greatly improved.
Very often, IT systems are designed as if they worked perfectly. This may be true for each single system, but as a matter of fact it often turns out not to be true globally! Perfection is often out of reach, and this should be taken as an input from the design phase onwards.
Systems that are not designed to behave nicely in case of failure create all sorts of issues in downstream systems.
Let's illustrate this with a very common pitfall: a piece of data (a file or a unitary value) is missing; what should we do? Block the entire process without processing anything, or skip the missing data, process whatever we can, and perform a partial rerun when the data becomes available?
Of course, the second option seems more sensible, but it is unfortunately rarely implemented, even manually through a process.
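A minimal sketch of this skip-and-continue option (the record structure and the computation are invented for the illustration): process what is available, keep track of what was skipped, and rerun only the missing part later.

```python
def process_batch(records: dict[str, float | None]) -> tuple[dict, list[str]]:
    """Process whatever is available; return results and the keys to rerun."""
    results, skipped = {}, []
    for key, value in records.items():
        if value is None:           # missing data: skip instead of blocking
            skipped.append(key)
            continue
        results[key] = value * 1.1  # placeholder for the real computation
    return results, skipped

results, skipped = process_batch({"A": 10.0, "B": None, "C": 7.0})
# later, when B arrives, a partial rerun covers only the skipped keys
results.update(process_batch({"B": 3.0})[0])
```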
Just to highlight this point, a little anecdote: once, as an accounting system was computing millions of transactions for the end-of-month results, it got blocked because a currency value was undefined. Fortunately, as the process was considered sensitive for the company, a manual check was performed throughout the chain. The failure was detected during the night, and the on-call manager, who had not the faintest idea of the right value, decided to set it to 1€. The process could start again, and the next morning the accountants manually fixed the impacted transactions. Of course the run was not perfect, but more than 99.99% of the goal was reached!
Even more common is the "cron" syndrome. Because open-system developers usually have little experience in managing batches, they are not used to the capabilities of enterprise-wide schedulers. They implement batches with primitive tooling, which ends up producing very rigid chains that are not flexible enough to adjust should a problem arise.
As an example, batches are very often triggered at a given time rather than upon a condition. Any delay upstream then creates an issue for the whole downstream chain. The same applies, by the way, to return codes, which are not always fully implemented. Chaining jobs becomes difficult without knowing the exact status of the previous job.
Regarding batches, it is very common to see a black-and-white approach: either the batch runs or it does not. If it does not, nothing is produced and the error has to be fixed before rerunning the complete batch. This of course does not work for large batches that take more than a few minutes to complete. To increase the resilience of such batches, restart points have to be defined within the job logic so that a rerun only performs the missing computation and not the complete job.
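A hedged sketch of such a restart point (the checkpoint file and the chunking scheme are illustrative, not a real scheduler feature): the job records its progress so that a rerun resumes after the last completed chunk instead of starting over.

```python
import json
import os

CHECKPOINT = "job.checkpoint"  # hypothetical file recording progress

def process_chunk(chunk: list[str]) -> None:
    ...  # placeholder for the actual computation

def run_batch(items: list[str], chunk_size: int = 100) -> None:
    # Resume after the last successfully completed chunk, if any.
    start = 0
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            start = json.load(f)["next_index"]
    for i in range(start, len(items), chunk_size):
        process_chunk(items[i:i + chunk_size])
        # Record the restart point only once the chunk is fully done.
        with open(CHECKPOINT, "w") as f:
            json.dump({"next_index": i + chunk_size}, f)
    os.remove(CHECKPOINT)  # clean run: the next execution starts from scratch
```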
Data quality is not a common concept either: consolidation systems, for example, rely by design on input coming from potentially hundreds of systems. Statistically, they cannot be right every day, as an input will most likely be missing one day or another.
A way of managing such a situation is to implement a fallback mechanism that estimates the missing data, for example based on the previous day's input, and to reflect this in a quality flag showing how reliable the computed figure is.
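A minimal sketch of that fallback idea (the field names and source systems are invented): when today's input is missing, reuse yesterday's figure and lower the quality flag accordingly.

```python
from dataclasses import dataclass

@dataclass
class Figure:
    value: float
    quality: str  # "ACTUAL" or "ESTIMATED"

def consolidate(today: dict[str, float], yesterday: dict[str, float],
                sources: list[str]) -> Figure:
    total, estimated = 0.0, False
    for src in sources:
        if src in today:
            total += today[src]
        else:
            # Fallback: estimate the missing input from the previous day.
            total += yesterday.get(src, 0.0)
            estimated = True
    return Figure(total, "ESTIMATED" if estimated else "ACTUAL")

fig = consolidate({"sysA": 100.0}, {"sysA": 98.0, "sysB": 45.0}, ["sysA", "sysB"])
print(fig)  # Figure(value=145.0, quality='ESTIMATED')
```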
In a nutshell, let's move away from the optimistic approach and build systems that are ready to fail, with the needed fallback mechanisms implemented from the design phase!

Thursday 1 November 2012

I am back!

It's been a long time since I wrote something here!

As I now have many colleagues who speak English, I'll switch to something which will pass for English...
I'll also try to post on a more regular basis (not so difficult)...

See you soon