“If you save your breath I feel a man like you can manage it. And if you don’t manage it, you’ll die. Only slowly, very slowly old friend.”
I started my career working in the channel, in sales and management at mega-large industrial distributors. Later in my career I succeeded in creating global OEM and Reseller agreements that led to tens of millions of dollars in new business and dream exits for early-stage technology businesses. This experience set a foundation for how I think about the channel and how I have been able to build the good, while managing the bad, and avoiding the ugly.
“There are two kinds of people in the world: those with guns and those that dig. You dig?”
Whether OEM, Distribution, VAR, or Representative Sales, channels and partners should be part of the total selling strategy. I have created channel relationships with some of the largest technology companies in the world, brand names like HP, IBM, BMC, Cisco, CA, Ericsson, BT, AT&T and NTT. The good is that the channel can be a huge contributor to your business and provide scalable revenue growth.
Good OEM relationships are characterized by the OEM owning the end customer relationship, Cx, and 1st and 2nd level support. In OEM deals, your product typically becomes a component of a much larger solution. OEM relationships can be a great way to build technology while building revenue: OEMs provide a funding source for your R&D. OEMs are also a direct route to market and a way to gain insight into customer personas, without the direct sales overhead expense.
Good Distribution/VAR relationships are typically not white label, like OEM, and require more pull-through marketing to build channel demand. Examples of VARs/Distributors are companies like Avnet, Arrow, Graybar, and Anixter. They expect you to drive market demand while the VARs deliver your product by attaching their services or complementary products. A great VAR relationship can greatly reduce the cost of sales and provide access to global and vertical markets without building out local facilities. Global Distributors are now hybrids that provide VAR-like services while maintaining their traditional value: maintaining inventory and providing credit.
Good Selling Representative channels are typically successful when there are individuals or groups of people who are highly connected with your target customer. I have been most successful with these when opening international or geographic markets. The Rep is independent and maintains the selling relationship with the customer. The company supports the product and provides the terms of the sale and finance.
“…but you know the pity is when I’m paid, I always follow my job through. You know that.”
The bad part about building a channel strategy is getting started and then executing. Building the right channel model, legal contracts, selling tools/CRM and branded marketing collateral is time consuming and can be costly. Pricing models, localization, product support, sales overlays, and supporting assigned teams all require resources.
There are several challenges that can delay or derail a channel plan, including:
Building enough interest in the market so that there is demand.
Developing the right relationships that will deliver results is often difficult to predict.
Going global through channels before penetrating the domestic market can dilute cash resources.
Understanding how best to channel the product requires experience. Make sure that the effort doesn’t turn ugly.
“If you want to shoot, shoot, don’t talk.”
The ugly is when the channel strategy goes wrong. The OEMs are not interested. The VARs and Reps commit to penetration and account exposure but are not delivering. Tools, marketing, and ops expenses have been spent, but there is little revenue to show for the time and cost.
I have been in these situations as part of my consulting practice and was able to move from ugly back to good by refocusing. What is typically wrong is that the channel partners picked the company rather than the company picking the channel partners. The partners were either easy for the company to access (they came to it) or friends and associates that the leadership team supported. The most important thing to do to stay out of the ugly is to understand your market and focus your efforts on the channel partners (OEMs, Resellers, Reps) that will deliver value. Measure the effort, incentivize the results, and support their success; all are huge factors in a good channel strategy.
So a sales channel can be good (it can be great), it can be bad (hard to do), or downright ugly. The lessons I have learned in building global channels are that, like any approach, there needs to be a solid plan, an understanding of the value of the product to the channel, measurements, a structure to support it, and the committed financial resources necessary to be successful.
In 2004, I joined a startup in San Francisco in the Microsoft Visio toolset software business. The company had licensed visualization technology and was building a toolkit to help engineers document data center racks. This company, Visual Network Design (Rackwise), had about six people and a few thousand dollars in total revenue. What I knew at the time was that their business model didn’t work; what I didn’t know was that this company would be one of the early innovators in Data Center Infrastructure Management (DCIM) software.
My early days in DCIM and the Network Management Software space led to me launching Nlyte Software into the US market, as President, and then building my consulting practice and assisting in growing the software businesses of Schneider, Geist, No Limits, InControl, Optimum Path, Asset-Point, Modius, RIT Tech and Track-It. I have been on the product development and marketing side, closed early clients and partners, worked with VCs, advised analysts, and written and executed the complete corporate plan for these and other companies. The evolution of DCIM, the strategic interest, and new emerging markets have led DCIM to a crossroads.
Evolution of DCIM
What is DCIM? My guess is that if you are reading this you probably already have a pretty good idea, but fundamentally DCIM is the management of data center infrastructure with regard to cooling, space, capacity, and power. IT & network assets (ITAM), service (ITSM), and uptime (NMS) are all associated components of the physical management of the data center, and DCIM vendors have features to support parts of this as well.
Facilities vs. IT
Who in the organization should own DCIM, and why is that important? It is really interesting, as ownership differs from company to company. I have seen IT own the software, facilities own the space, and HR own the budget. It can make for a difficult and long sales cycle when HR owns the data center budget. I believe that right now, companies that own data centers have determined that it is core to their business. If not, then they have outsourced or will soon move to an outsource model. Therefore, if they own their own data center, the DCIM budget is now strategic. Strategic funding comes from the executive level, and so CEOs, CIOs, and CTOs are now directing DCIM buying and architecture decisions. This is important, as selling at this level can grow into a much bigger and more strategic sale.
Back in the early days of DCIM I worked closely with (my mate) Robert Neave, CTO and co-founder of GDCM (Nlyte Software). Rob had managed a large data center for UPS in the UK, so he had a deep understanding of what was needed and knew that the DCIM software that existed in the market at that time had some serious deficiencies. Rob was a visionary, and I had a great time helping him bring his vision to market. Rob and I both realized that DCIM would touch everything from IT to Facilities to Service Desk. I believe that even more now.
Outsource Impacts Evolution
Cloud and co-lo providers had a serious impact on the DCIM market as enterprises shifted from an insource to an outsource model. This seriously impacted the growth of DCIM, did serious damage to the appetite for investment in DCIM technology, and killed off a few companies that were early entrants. What is happening now is interesting: the large incumbents have de-emphasized their DCIM innovation, focusing on their traditional business, while the smaller software-only players have focused their innovation on markets that are attracting new funding. IoT is one of those markets. There were always IoT components in data centers: sensors for temperature, humidity, access, etc. So some DCIM vendors have now built interfaces to support IoT data. It is not a far reach to now be able to manage those arrays in the context of larger upstream systems.
An IoT-assisted data center workflow example could be: run this pod (area of compute), turn on thermal imaging sensors, predict load impacts, start/stop the economizer, and reduce/optimize load when the temperature reaches a point where set point values need to be adjusted.
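A workflow like that can be sketched in a few lines of code. This is a minimal, hypothetical illustration only: the sensor interface, the linear load-to-temperature prediction, and all threshold values are my own illustrative assumptions, not tied to any particular DCIM or building management product.

```python
# Hypothetical IoT-assisted cooling workflow for one pod (area of compute).
# All interfaces and threshold values are illustrative assumptions.

SET_POINT_C = 24.0         # target inlet temperature
ECONOMIZER_MARGIN_C = 2.0  # free cooling allowed this far below set point

def predict_load_impact(current_temp_c, added_kw):
    """Crude linear prediction: assume each added kW raises inlet temp ~0.5 C."""
    return current_temp_c + 0.5 * added_kw

def manage_pod(current_temp_c, planned_load_kw):
    """Decide cooling actions before starting new load in the pod."""
    predicted = predict_load_impact(current_temp_c, planned_load_kw)
    actions = []
    if predicted < SET_POINT_C - ECONOMIZER_MARGIN_C:
        # Comfortably under set point: outside-air free cooling is enough.
        actions.append("start_economizer")
    elif predicted < SET_POINT_C:
        # Close to set point: fall back to mechanical cooling.
        actions.append("stop_economizer")
        actions.append("run_mechanical_cooling")
    else:
        # Predicted breach: shed/defer load and review set point values.
        actions.append("reduce_load")
        actions.append("review_set_points")
    return predicted, actions

predicted, actions = manage_pod(current_temp_c=20.0, planned_load_kw=3.0)
print(predicted, actions)  # 21.5 ['start_economizer']
```

The point is not the arithmetic; it is that once sensor data flows into the DCIM platform, decisions like economizer start/stop become simple, automatable rules layered on top of the model.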
Internet of Things
IoT offers great advancement in tuning, measuring, and managing, but there are large challenges around protocol compatibility and security. DCIM has, for the most part, already solved those problems, and the platforms’ modeling and predictive capabilities should be leveraged both inside and now outside the data center.
The evolution of DCIM, the executive-level interest, and the new emerging IoT market have led DCIM to a crossroads. It will be interesting to see which companies have the vision and capacity to continue to evolve in and out of the traditional data center. The ones that do could be the ones that continue their journey beyond today’s crossroads.
Spending the last twenty months working with a software technology innovator in the Heavy Civil Construction market has been very exciting and reminds me a lot of 2002. I was aware of the planning side of the market, as I had worked with CAD supplier Autodesk in the past, but I didn’t realize the technology effort now being applied to bringing large civil projects to completion. When you think about the billions in global expenditures on Roads, Bridges, Dams, Tunnels, Railways and Airports, it makes complete sense. It is really a wonder that I wasn’t aware that this market represents a new frontier for technology innovation. Drones, IoT, Mobility, Automation, Data Mining, Fleet Management, Mapping, Collaboration, Payment Systems, Maintenance, Scheduling, Document Management and 3D/4D Modeling are all innovating rapidly to grow in this marketplace.
The scope of engagement for me personally has been quite broad. My work with Pavia Systems has allowed me to engage clients at large Departments of Transportation, consulting engineering firms, and technology vendors. What I have learned is that the market characteristics of this space are very similar to the early days of Network Management Software. Specifically, there are lots of fragmented point solutions that lack integration capability, and the buying requirements often lack the information necessary to build toward a long-term, scalable architecture. Further, there are many custom and home-grown solutions, and Commercial Off the Shelf (COTS) solutions are still evolving. Users of the systems typically don’t have a technology background, are averse to change, and require products that are easy to use, adapt well to their current processes, and support self-service. Small productivity gains and risk avoidance are driving many buying decisions, with true technology architecture decisions taking a back seat to single-point or custom requirements. As products/services and buying sophistication advance, this will transition toward consolidating all of the information and delivering greater insight from it. Things like project modeling, intelligent material selection, scheduling, cost management, and risk avoidance will advance on a path similar to the one fault, capacity, availability, performance, and security took in the Network Management space.
Intelligence, data mining, event notification, and single pane of glass.
Pavia Systems is one of the vendors with a clear vision of collecting information once, integrating disparate systems, and delivering a single pane of glass on the complete life cycle of Heavy Construction projects. Pavia’s HeadLight Project Intelligence Platform builds off of a mobile collection interface that clicks and swipes, vs. types and writes, on daily site inspection data that is then stored, shared, and indexed on their cloud-based platform. In my research, they are the only vendor that gathers this data via rich media (video, images, and mapping) and intelligently indexes it. This ability to collect and manage large files builds a YouTube-like library of what happened, when, and by whom. Integrating this across the stack of payments, drawings, legal, drone video, and events provides the closest match to true project intelligence that exists in the market today.
New Frontier for growth
With a market TAM in the billions, the opportunity in Heavy Civil Construction software, services, and tools is large and growing. The market is really in the early stages of development, with a few large companies focused on just a few niche areas, and it lacks a true architecture to support an end-to-end process. Building today’s and tomorrow’s Roads, Bridges and Railways is going to require higher integration to deliver projects faster, with higher reliability and lower cost, much like the Network Management market did in building the information superhighway in the early 2000s.
To learn more about Pinpoint Worldwide and how we have solved company growth problems, helped penetrate new markets and launched innovative technology to a global marketplace, please visit http://www.pinpointworldwide.com or contact me at email@example.com
In my twenty-year career I have had the opportunity to build and head sales in early stage, mid stage, and large organizations, leading to hundreds of millions of dollars in global sales. Each company’s stage presents its own set of challenges and opportunities. This blog will address the key elements in creating a high-functioning sales engine in the early stage venture. In getting started, there are three things that have to be done in the early stage that, if not done correctly, could break the sales car before it leaves the garage. In the early stage it is critical that sales leadership 1. personally engage prospects and the market, 2. build out a sales engine (CRM, Process & Measurements), and 3. document and share the sales playbook.
The New Venture
The profile of the early stage venture is typically a revenue starting point of under $1 million, with limited sales resources (people), little or no channel, low product/service market awareness, a limited marketing budget, lightly seeded or bootstrapped funding, limited engineering resources, and no or just a few clients. Really, who would want to start a sales organization with this? Ah, but in the challenge lies the accomplishment. The new venture is a stage that is really exciting and really fun. The company has a newness and is pressed to move forward at a high rate of speed. There is little bureaucracy or past baggage, and sales is truly the engine that is pushing the company race car. Fun, fun, fun. I have always felt that my actions had a major impact on winning deals, and it was never truer than in the new venture. Winning here is imperative. There are not many second chances.
Personally engage prospects – as many as humanly possible
This seems like a “no-brainer,” but the funny thing is I have been in organizations where the executive team only worked with a few “key” prospects and didn’t really have a feel for the total marketplace. It can’t be stressed enough how important it is to understand your customer. Early communication leads to the right playbook and early selling opportunities. I have several stories of how and why this works, but here is one.
I had the opportunity to launch an early stage UK software company into the US market. The leadership team had spent a great deal of time with a few key local clients, but they just didn’t have a feel for why they were not selling more in the US. I spent the first month speaking with global prospects, about 30 of them, and gathered intel on the perception of the company and how the product matched their requirements. What I found out was that their largest competitor had done their homework on the company. They knew where the product and company had holes and were broadcasting that to the market. Armed with this information, I changed our global approach and the sales playbook, and reworked how we attacked the US.
Early conversations with prospects allow sales leadership to build a working playbook that can be templated for the sales organization. This interaction leads to early company sales even if the product is not quite at the commercial stage. Prospects appreciate a consultative approach, and often, since the product is still in development, features can be tailored to fit a market gap and take advantage of an incumbent’s weakness.
Systems, Processes, & Measurements
Every successful sales organization, regardless of size or stage, has to incorporate systems, sales processes, and measurements. Typically in the early stage, not much of this foundation has been laid, so it is an opportunity to create a modern, “world-class” selling engine.
The Customer Relationship Management platform is a critical piece of the sales engine. I have worked with just about every CRM platform, including home grown, and lean in the direction of Salesforce.com. With Salesforce, I have been able to build up from a blank shell the necessary infrastructure to manage sales from early stage to late, from US to global, and from direct to channel. It also provides the foundation necessary to build out a manageable sales process, task-based and stage-based tracking, and KPI measurements. It doesn’t have to be Salesforce, but the foundation of a measurable process, progress management, reliable forecasting, client/sales engagement history, and rep/channel accountability are the injectors, pistons, and transmission of a “world-class” sales engine.
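To make "stage-based tracking" concrete, here is a tiny sketch of the kind of weighted-pipeline forecast a CRM foundation should make routine. The stage names and win probabilities are hypothetical examples I have chosen for illustration, not Salesforce defaults; every team calibrates its own.

```python
# Illustrative stage-based pipeline forecast. Stage names and win
# probabilities are hypothetical; calibrate them to your own history.

STAGE_WIN_PROBABILITY = {
    "qualification": 0.10,
    "demo":          0.30,
    "proposal":      0.60,
    "negotiation":   0.80,
    "closed_won":    1.00,
}

def weighted_forecast(opportunities):
    """Sum each deal's value weighted by its stage's win probability."""
    return sum(amount * STAGE_WIN_PROBABILITY[stage]
               for amount, stage in opportunities)

# A toy pipeline: (deal amount in dollars, current stage)
pipeline = [
    (100_000, "proposal"),
    (50_000,  "demo"),
    (25_000,  "negotiation"),
]
print(weighted_forecast(pipeline))  # 95000.0
```

Whether it lives in Salesforce reports or a spreadsheet, this is the discipline that turns a list of deals into a reliable forecast and gives reps stage-by-stage accountability.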
Document the Playbook
Now that you have spoken to the market and the tools are being implemented, it is time to build the sales playbook. The sales playbook’s objective is to capture the plays that you (sales leadership) know will work as a resource for the team. At a minimum, the sales playbook outlines what we are selling, to whom (personas), our selling process, our strengths/risks, pricing/packaging, handling objections, our selling collateral, and our competition. The playbook should be written and communicated so that everyone in the organization can understand it. It is very important for the team (especially in the early stage) to be on board with the approach. When resources are limited, everyone is on the sales team.
In early stage ventures it is important to build a solid foundation for growth. It offers the unique opportunity to do-it-right the first time. So engage the marketplace. Talk to customers, analysts, domain experts. Get that first hand intel to lead the selection of the right tools and build an executable sales plan for success. Do it right. Build the right engine. Hit the throttle and enjoy the ride!
The Pinpoint Worldwide Take is my review of current technology in the IT software, DCIM, and Infrastructure Management space. This current take is my review of an independent software and services vendor with a unique application that unifies the service desk with DCIM by leveraging a mobile UI that records, updates, prints, bar codes, and audits while the technician is performing the work. This take is about TRACKIT Solutions, specifically the TRACKIT Mobile App.
See Video on Pinpoint TV or read transcript below.
Hi, this is Daniel Tautges with Pinpoint Worldwide. Thank you for joining me today. I’m excited to talk to you about a company that I’ve been working with the last few months in the software technology space. They’ve got a very cool product that not a lot of people know about. In this video I’ll talk a little about what the product is and how it fits in the marketplace. The company is TRACKIT Solutions. TRACKIT has been around for about eight years, but their primary business has been in audit services. They have worked with some of the largest banks and the largest companies in the world doing audit services.
As part of their audit business they developed a software application that runs on a ruggedized mobile tablet, and this is the thing that I am really excited about: this ruggedized mobile tablet connects back into a back-end database. The database has API connectors into service desk applications like BMC Remedy, ServiceNow, and HP, and they also have DCIM (Data Center Infrastructure Management) connectors to applications like Nlyte, Aperture, and Schneider, so they are really a middleware solution, and a mobile one, that allows you to download information from both of those systems.
Imagine a workflow ticket coming from a Service Desk connected to the DCIM application. TRACKIT combines those in a mobile tablet that then allows you to do all of your work within the cabinet space. So when moving devices around or changing network connections, I can literally look at the DCIM rack elevation drawing. I can look at the work order and I can move devices within the rack from RU position one to position three, as an example. I can do that in a very visual way, and I can also then validate, visually and physically, that the DCIM application is correct and that the service desk asset management application is also correct. As I am doing my work I am also doing an inventory and validation of that inventory.
What I have found in my work as a consultant in the space is that lots of clients I talk to don’t have a very high level of confidence that their data center infrastructure management suite is a hundred percent correct. TRACKIT allows you, as you’re doing the work within a cabinet, to validate all of those things: do the work, actually make the changes, and then update those systems automatically.
The best part about it is that it works in online or offline mode, so I could literally be offline in a dark data center doing the work, then go back and plug in, and TRACKIT feeds all of these systems with updates. The thing I also really like about it is that it is both ruggedized and highly secure, so if somebody sticks it in their backpack and takes it home, it’s not going to do them any good. TRACKIT is a purpose-built application that runs on the Windows platform. The application feeds the TRACKIT database and then instantiates the other two systems.
I have talked to a couple big clients that actually use TRACKIT to update five, six or seven different systems, inclusive of their accounting packages and some of their compliance stuff. I see a lot of products in the datacenter infrastructure management and the Service Desk space but I don’t see a lot of applications that join those together in a really meaningful way.
When I was doing some research with some of their clients, I also found out that they get 10X productivity gains when they use the product. Making moves, adds, and changes requires fewer people, and you can do it faster, so they get more done and a huge productivity gain. TRACKIT is not a super expensive product to go out and buy, so I suggest you take a look at it if you haven’t already. Give it a go. The URL is http://www.trackit-solutions.com.
I think it is a very unique and interesting product and you guys could really get a lot of benefit out of it. If I can be of any help please reach out to me personally at Daniel@pinpointworldwide.com. Thanks for tuning in!
Ever heard the term “garbage in equals garbage out”? Never has it been more apparent than when advanced and expensive enterprise applications are tasked to forecast, react, account, display, triage, or audit highly sensitive and mission-critical information. In fact, yikes, this author is aware of several million-dollar enterprise applications that never made their way off the shelf because it was just too hard to keep the data that feeds them current.
Ensuring data accuracy and reliability is a huge problem. Luckily, one company (Trackit-solutions) has a unique mobile solution and process that is in production at today’s top data center operators.
Please enjoy my Skype interview with Trackit-solutions founder and CEO Steve Beber and learn how Steve’s company is bringing his solution to the aid of global DCIM, ITAM, Fault Management, and Service Desk implementers and operators.
Data is the Foundation for Million Dollar Applications – Pinpoint’s Point-to-Point Exclusive Interview with Steve Beber
Data center auditing and data integrity are problems for a lot of companies. They don’t know where their assets are, they don’t know exactly what assets they’ve got, and they don’t know if they’ve been depreciated. Daniel Tautges talks to Steve Beber of Trackit about the problems data centers face with data auditing and data integrity and how they can be resolved.
Steve Beber is the CEO and founder of Trackit Solutions. Trackit is one of the leaders in this industry; they’ve audited more than one million assets worldwide. Before launching Trackit Steve held a number of senior management positions, most recently as VP of Professional Services for EMEA at Emerson Network Power (Aperture).
Daniel Tautges is the president and founder at Pinpoint Worldwide, a business acceleration company. Formerly he was the president of Nlyte Software, vice president of Rackwise, and vice president at Micromuse.
Why is data collection of interest to our audience?
Obviously if you’re buying assets it’s important to understand what they are and where they’re located. From a data center point of view, from an asset management point of view, it’s understanding your capacity. You grow within an environment, and if you don’t know what you’ve got and where it’s located you can’t do that accurately. It’s all about understanding where those assets are located. We do all IT assets with a focus on the data center, including power and network connections.
Is this only for big data centers? Small data centers? Regional distributors?
Any size. We’ve been doing this now since 2008, and we’ve done a range of sizes from small comm rooms to global banks. The biggest was over a quarter of a million devices for one of the world’s leading banks. We’ve worked around the globe in every size data center. It’s as important to a small business as it is to a large business to understand what assets are owned.
Are there trends in the marketplace with specific verticals like telecom or financial services?
In the early stages we saw a lot of banks and a lot of financial service businesses that were looking for audits to understand what they had and where it was located. I think some of that was driven by equipment that was put in very quickly, not necessarily in a structured and recorded manner. We’ve now seen the same trend over the past year on the telecom side, due to the fact that the telecom industry is seeing a huge boom. A lot of communication rooms are now being transformed into data centers. We trace what they’ve got and where it’s located so that they have an accurate record for consolidation and growth as well as for the future.
Why is it so hard to capture data? It sounds like a difficult problem for a lot of different industries.
It’s more a need for the right tool for the right job, the right people, and the right processes. I believe the reason we’re so good at what we do is because we’ve been doing it for eight years. We’ve matured over those eight years and we’ve refined our processes. We’ve got great teams of people who are very experienced with doing audits and we’ve got a great product.
What system does the data that you collect feed back into and can you interface with upstream and downstream systems that would utilize your data?
Sure. The Trackit Audit repository can pull or push information to service desks, CMDB, ITAM, and DCIM services. We can map existing data sources into our product, show them as rack elevations within our product, and take that onto the floor.
So if I’ve got an existing workflow system–for example, I’m a ServiceNow client or I’m a Remedy client or I’m an HP Service Desk or one of those asset management or service desk systems, how do I leverage Trackit and what value do you provide me?
One of the failings of those products is the fact that they don’t have the visual element and they don’t have the mobility to take the data down to the floor. Before we became tablet driven, everything was about reading lines of data and not actually looking at elevations. There was no visual element, because the screens didn’t have the capacity to show rack elevations. With our new technology, there are rack elevations on the actual tablet, so you have the ability to view the recorded rack against the rack that you’re actually looking at. One of the big advantages is that when you look at a rack and stand side by side with that data, you can see instantly if there’s a problem with what you’re looking at, so the mobile view of rack elevations provides instant configuration feedback. It makes highlighting problems so much simpler.
If I’m a data center and if I have cables connected to the wrong port or if I have devices that are there but not accounted for, what kind of problems does that create?
From a capacity planning point of view, and from the management of day-to-day operations point of view, it can create huge problems. The worst case scenario is if you have an outage in a data center and you don’t know which devices have gone down, you don’t know where they’re located, and you don’t know which applications are running on which servers. How do you go about finding those servers? You need to find them quickly. IT is at the center of a business, and if a device goes down it can cost a lot of money. You want to find that device and you want to work out what the issue is. If you’re using outdated Excel spreadsheets, it’s a lot like looking for a needle in a haystack.
Are you hearing anything from your clients about compliance? I would think compliance would be a big issue, specifically for the financial services business, but for any business that depreciates their assets.
I think more and more there’s a push on compliance, with businesses taking more ownership for the full lifecycle of the equipment they have. How can you be sure that you’re managing your data security properly if you don’t actually know where your assets are located? A lot of people use spreadsheets for data that they can’t be sure is accurate. I see a lot of legislation coming in the future putting more ownership on data center owners and businesses in general to have accurate records.
Are you seeing anything around security, such as not knowing where assets are physically located or thinking that they’ve been decommissioned when they’re still working?
Hard drives these days can contain confidential data and if a device is decommissioned and you don’t have accurate tracking of where that device went and how it was disposed of, all of a sudden that data crops up. This is where you hear bad stories in the press about data that’s been found in a dump somewhere, and it turns out to be bank records or account details. We track assets throughout their lifecycle and then at the end of their lives have a document attached to show that they’ve been properly disposed of.
From a Trackit perspective, give me an idea of the envelope of the solution, what you guys do from end to end, how you differentiate yourselves.
A lot of solutions on the market are focused on the high-end features vs. low-level raw data collection–things like worrying about graphical dashboards instead of focusing on asset management. If you build a house you wouldn’t build the house from the roof down. You have to start with the core, asset management. You have to start with good quality data and you have to have a mechanism to maintain good quality data. Once you have that, then you can layer on other things.
We like to take people on a journey, and that journey begins with a data collection workshop that introduces our approach, typically a one-day workshop. In the workshop we look at what the customer is doing now for asset management. We present some best practices and then we come to a decision with them about what they want to achieve, how they can maintain that, and the standards that can be adopted to maintain the system accurately.
We want to find a way for the customer to maintain good asset quality first, and once they’ve done that for a set period of time we look at whatever layers can be added on, such as intelligence sources that can be polled internally or sniffers and devices on the network that can collect information. We also look at power audits and network connection audits that give even more granular information and permit more complex reports. Then we can build reports and dashboards that help the customer get the benefits of that good quality data. It’s very much a phased approach. We like to say we take customers on a journey. We start at the very beginning, set standards, understand what it takes to get and maintain good quality data, and then take that through a full lifecycle so the customer gets value out of it very quickly.
Can you give me a use case? Where was the client before Trackit and after Trackit? What did the journey result in for that client?
Over the last eight years we've done some big global projects, with clients including the world's leading banks, retailers, and telecoms. We had a customer recently that for six months had been trying to get data to deploy on a site: about 500 racks, around 11,000 devices. The issue was that the in-house teams were trying to collect data on top of their day jobs, and it wasn't happening. We went in and, in a two-week period, collected 11,500 devices. A team of five, one administrator and four auditors, will collect in excess of 1,000 devices per day, and every device we hit is QAed by the audit administrator. Every time a device is audited it's timestamped and user stamped, so we have a full tracking history. All the data is in a cloud environment that we hold for customers, and it's visible at the end of every working day. There's no waiting around and no time lapse; it's instantly available and instantly viewable, and the data can then be exported or maintained and managed in the cloud. It provides real value for money straight away.
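The throughput figures above lend themselves to a quick back-of-the-envelope sizing check. Here is a minimal sketch in Python, assuming the five-person team (one QA administrator, four auditors) and the 1,000-plus devices per day cited above; the function name and the per-auditor rate split are illustrative assumptions, not part of Trackit's methodology:

```python
import math

def audit_duration_days(device_count, auditors=4, devices_per_auditor_per_day=250):
    """Estimate working days needed for a physical audit.

    Assumes the rate described in the interview: a five-person team
    (one QA administrator plus four auditors) collecting roughly
    1,000 devices per day, i.e. about 250 per auditor per day.
    """
    daily_rate = auditors * devices_per_auditor_per_day
    return math.ceil(device_count / daily_rate)

# The 11,500-device site from the interview works out to about
# 12 working days, consistent with the two-week engagement described.
print(audit_duration_days(11_500))  # -> 12
```

Scaling the team up or down simply shifts `daily_rate`, which is why a dedicated audit crew finishes in weeks where in-house staff juggling day jobs could not finish in months.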
During these operations they’re still doing move, adds, and changes?
Typically, what we do is put a change process in place so that the customer is recording their changes while we conduct an audit. We don’t want the changes to affect anything, so we like to make sure the data’s contained and not causing a problem.
So can you do an audit anywhere in the world?
There’s no limit. We’ve completed audits in Australia, Brazil, China, Singapore, India, and all over the US. We deploy teams wherever there’s a requirement. Very small sites to very large. It’s critical for all sites to know what’s there and what equipment they have.
What about pricing? How do you price the product?
The auditing service is priced per device, so typically we go out to a customer and conduct a pre-audit survey. From that we can deliver a project plan with timing, resource structure, and associated costs. We offer several different flavors of solution, and we sell bundled kits; some of the biggest banks around the world use those for their asset management in conjunction with their DCIM tools. Then we have our Enterprise solution, which can be a fully hosted cloud service that we manage. You can access it via the web, or you can have it hosted internally. Different flavors; one size doesn't fit all. We have a completely different approach for banks, for telecoms, for small telecoms, for trading rooms. We have different technologies and different sets of equipment, some of which are very bespoke. It really is about what fits the customer.
How would someone get more information about Trackit?
Data asset management and good quality data are important. This is a solution that’s been grown and developed over the last eight years, it’s very mature, it’s very tried and tested, we have very good case studies and very good customer stories that we can share.
3 Ways to Reduce the Costs of Data Center Audits
Auditing becomes more critical all the time. It's always been important for a business: it prevents fraud, provides trustworthy financial reporting, and helps an organization pursue its objectives. Government regulation makes auditing even more important. The consequences of sloppy business practices can be truly painful, and good auditing ensures things stay on track.
In the recording below, Tautges talks to Bruce Frank about his data center audit experiences and how he was able to make audits less costly, faster, easier, and more accurate.
Citi's Vice President of Global Technical Operations, Bruce Frank, is a twenty-year veteran of technical operations. He was formerly a Director for Dendrite International and EDS.
Daniel Tautges is the President and Founder of Pinpoint Worldwide, a business acceleration company. Formerly he was the President of nlyte Software and Vice President of Visual Network Design (Rackwise), Micromuse (now IBM), and Lucent Technologies.
What does the Citi data center estate look like today?
Currently we have 14 strategic data centers spread across four regions: North America, Latin America, Asia, and Europe/Middle East/Africa. Outside the strategic data centers, which manage our critical business applications, we have 285 tech rooms and satellite data centers. Some of these satellite locations house between 5,000 and 10,000 servers. We also manage small pieces of infrastructure for 8,000 branches, and we have 76,000 physical servers. At one point we were closer to 100,000 servers, but we've since scaled down from the physical server perspective. We have another 125,000 to 150,000 devices under management within the data center.
From the standpoint of auditing, what do you consider best practices for an audit and why is it so important to Citi?
We’re under scrutiny all the time, not only internally but externally, especially on the investment banking side. There’s the FCC and Sarbanes-Oxley. I’m mainly dealing with data quality. Probably ten times a year we have to make sure that information in the database represents a physical device. There are a lot of fines that could be assessed just based on the way we have our infrastructure set up. We’re involved in exercises; we probably have 20,000 or 30,000 pieces of equipment across the globe and we may need to know what applications are on the equipment and what the equipment connects to. Based on best practices, we strive for 95% data accuracy, what’s in the system vs. what’s on the rack. Once a year we officially reconcile, which involves a manager signing off that everything has been checked. This involves a walkthrough of every single data center from top to bottom and ensures that the device isn’t just in the rack but that it’s also the right model and the barcode is correct. We’ve had stuff that was decommissioned five years ago but was still in the rack, and we’re paying for the maintenance and warranty. We depreciate our equipment for three or four years, and without a proper inventory it becomes very difficult to manage.
Last year a government agency came in for business-critical information. They couldn’t find the device with the data and they had to do a walkthrough of every single device in the data center. After about two weeks they found the device, wrong name on it, wrong label, but they found it and the data was there. The way they found it was by serial number and IP address.
What made you successful doing audits while reducing costs?
Data quality is my number one goal. For the last 12 months my team and I have been working on data quality. We've been scrubbing 250,000 pieces of equipment to ensure that the quality is there: that the make is right, the model is right, and so are the platform and connectivity. One of the things we do goes back to reconcilement. We brought in Trackit. The goal is to be able to update information while you're at the rack. You don't have to go back and forth; you have the tablet right in front of you, and you're executing right there.
All you do is take it back to your desktop and use the connector, which pulls in the data. The data's been entered and verified, so now we have a verifiable source, and when you go to the auditor you can show it's been verified. We have a pretty accurate inventory at this point.
In the past when we were doing audits it would take maybe half an hour to do a rack, since you need to verify the model and barcode of every device. We probably have 7,000 or 8,000 pieces of equipment in each data center, so the amount of time to verify all of this is significant. With a scanner you can bring that time down to one minute.
The first year it may take a little longer. In the past it was taking us about three or four months. Now we’re going to get it done in a month, and the next year in three weeks. Everything is barcoded, and we have an accurate inventory. At the end of the year we should be able to open up each cabinet and in two weeks we should have our inventory done.
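The per-rack timings above translate into a large difference in total hands-on hours. A hedged sketch follows; the 400-rack site size is purely illustrative, since the interview gives device counts rather than rack counts:

```python
def total_audit_hours(rack_count, minutes_per_rack):
    """Total hands-on audit time for a site, in hours."""
    return rack_count * minutes_per_rack / 60

# Clipboard audit at ~30 minutes per rack vs. barcode scanning
# at ~1 minute per rack, for a hypothetical 400-rack site.
manual = total_audit_hours(400, 30)   # 200 hours
scanned = total_audit_hours(400, 1)   # under 7 hours
print(manual, scanned)
```

The ratio is what matters: a 30x reduction in per-rack time is what turns a three- or four-month audit into a few weeks.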
So when you were putting together your requirements for a tool, you wanted something that could be integrated to backend systems or DCIM platforms with connectors. Offline mode was very important because of your lack of connectivity. And you wanted something easy to use that didn’t require a lot of training. Were there other requirements you had in mind when you chose Trackit as your tool?
That’s probably a good summary of the rationale for the choice. A big part was the integration, the flexibility of the tool, or the ability of the tool to talk to these other APIs. The integration with Aperture was major. The ability to pull down data and see it visually was important too. Obviously Citi has a large footprint, and new tools don’t always scale, and we end up doing a lot of work ourselves. With Trackit, 80% of the installation went off without a hitch. Overall I think we’re in good shape now.
Maybe we could talk about where you were before and where you are now.
Yes, well a big one was the audit. Each audit used to require between five and seven full-time employees up front in the data center and five more in the back office, because somebody has to get the information uploaded. At least ten people worked on an audit, and the audit took three months. Now an audit requires two employees and takes less than a month. We tried working with RFID. It worked, but it was very cumbersome and very difficult to manage. You can take the Trackit tablet, hand it over to the next guy, and he can see just by looking at the screen what he needs to do next. You don't have to fiddle through pages on a clipboard. We used to have multiple data entry points, but with Trackit you have this tool sitting in front of a server, you're verifying and updating it, you're syncing your pad, and you're ready to go with no manual re-entry. It's a largely automated process. Now we have increased accuracy for DCIM and ETM. We still have challenges: when moving data, we need to make sure that the data is properly represented, and for audits we need to make sure the data is where it says it is. Trackit supports the end-to-end lifecycle we want on our equipment.
So audits need to be done to ensure accuracy for DCIM planning and workflow, warranty and maintenance spend, software licensing costs, ledger depreciation, and ETM data theft. You said Trackit reduces audit costs by as much as 80%, which is significant. Can we get some idea what the future requirements are?
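The staffing figures in the previous answer can be sanity-checked against that 80% number. Here is a small sketch, covering labor only and using person-months as a rough proxy for audit cost:

```python
def person_months(headcount, months):
    """Rough labor cost proxy: headcount multiplied by duration."""
    return headcount * months

before = person_months(10, 3)  # ten staff over a three-month audit
after = person_months(2, 1)    # two staff in under a month
reduction = 1 - after / before
# Roughly a 93% labor reduction, comfortably above the 80% cost
# savings figure cited in the interview.
print(f"{reduction:.0%}")  # -> 93%
```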
One is integration with the CA tool so we can track CA development. That’s a major one. We’re moving pretty well on that piece. We’ve already told the CA organization we’ve made an investment with Trackit for the next three to five years, probably longer, so we want to move forward with the investment we made with this tool. We want to take the tool and make it do more for us than we’re doing right now. The other big piece here is the ability to use the Trackit application to do one of two things. The first thing is building out a data center where we don’t have a full visualization. We just have big rooms there. It would be nice to say, “We need new cabinets,” and with the Trackit tool drag and drop cabinets, sync it up, and—boom—the new cabinets are on the floor there. It would give us a way to get information into our drawings.
The second thing we'd like to do is leverage the Trackit tool to supplement our installation process. Something we do now is hardware validation. A piece of equipment arrives at our loading dock and needs to be scanned; it's scanned with a barcode number, and then we need to go back to Aperture and enter the barcode number, the serial number, and the model and make. We have most of that information already because we've already created a request, yet there's a guy on the dock trying to figure out where this equipment came from and what the purchase order number is. We want to leverage the Trackit application and download the information to a form on Trackit where it can synchronize. We want to take that information out of the DCIM application we're using at this time and drop it into the Trackit application, so that as a delivery comes through the door, the scanner pulls in the serial number.
We use Trackit to produce barcodes, and we can get that number on the box as it comes through the door. We're validating and putting quality into every step. Trackit gives us the ability to track critical assets across our global estate from cradle to grave.
Get more information on how Trackit-Solutions can help you: