This white paper describes a solution for moving KPIT's legacy Diagnostic and Connectivity desktop application platform to the cloud, ensuring an uninterrupted, global, fully managed service in a cost-effective manner using container-based virtualization.
Keywords: monolithic, scalable, cost-effective, optimal, container, orchestration
KPIT's Diagnostic and Connectivity Platform (KDCP), as it stands today, is a monolithic application, as it was originally designed to be. KDCP began as a desktop application designed to cater to one diagnostic session at a time. As such, the entire software needed to be installed on the PC connected to the Engine Control Unit (ECU) being diagnosed through a Vehicle Communication Interface (VCI).
Considering the advancements in ECUs and the global nature of the business, there was an opportunity to make the KDCP solution globally available and highly scalable. This cloud solution is a fundamental change in the way new-age technology is used while still adhering to the automotive diagnostics ISO standards.
The major technical challenges were the time needed to start a service instance and the underutilization of infrastructure. If the stack were moved to the cloud as-is, it would need one virtual machine (VM) per instance of the stack, i.e., per diagnostic session, resulting in hundreds of servers running at a time and, ultimately, a huge amount of unused server capacity. Obviously, this was not commercially viable.
KDCP is one of KPIT's prime offerings to the automobile industry, which is also the company's major focus area. It all started a few years back with serious thought leadership from the stakeholders, a core team of automotive engineers, and SMEs. Early penetration in the automotive diagnostics market space helped acquire some early wins, and soon a diagnostics product was born. Over time, the KDCP product matured into a platform with a broad range of end-to-end service offerings around automotive diagnostics and connectivity. With the acquisition of In2Soft, the portfolio grew even stronger with the addition of tools for ODX data authoring. Now, in the era of cloud-based solutions and SaaS models, it was a natural progression to make KDCP truly scale up to market demands.
Monolithic Applications
Scaling a monolithic application is not trivial, especially if it has grown over the years. The first and obvious option was to refactor the entire legacy stack and (almost) rewrite it into a new (micro)service-based architecture. But this is easier said than done. Typically, for products like this that have grown over years, much of the application knowledge lives in people's heads, and people move on, so a lot of effort and time is required. For KDCP, refactoring was going to be needed in the near future, but until then there was a need for an intermediate, stop-gap solution that would deliver the required scalability and infrastructure optimization without refactoring the entire platform, which itself was stable and well tested.
Containerization Approach
A container-based solution handles this scenario well. Containers are an abstraction at the application layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as an isolated process in user space. This way, the entire stack runs inside a container as if it were running on a desktop or a virtual machine, and several such stacks (containers) run on the same host machine, fully utilizing resources such as memory, CPU and disk and ensuring optimum use of infrastructure.
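To illustrate, the sketch below uses the Docker SDK for Python to start one instance of an application stack in a container with a fraction of the host's memory and CPU. The image name, resource limits, and labels are illustrative assumptions, not details taken from the KDCP solution.

```python
import docker

# Connect to the local Docker daemon (assumes Docker is installed and running).
client = docker.from_env()

# Launch one instance of the application stack in its own container.
# The image name, limits and label below are hypothetical values.
container = client.containers.run(
    "kdcp-stack:latest",                  # hypothetical image holding the full stack
    detach=True,                          # run in the background
    mem_limit="512m",                     # fraction of the host's memory
    nano_cpus=500_000_000,                # 0.5 CPU (units of 1e-9 CPUs)
    labels={"kdcp.state": "available"},   # pool bookkeeping label (assumed)
)

print(container.short_id, container.status)
```

Several such containers share the host's OS kernel, so one machine can host many isolated copies of the stack instead of dedicating a virtual machine to each.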
Solution Overview
Figure 1: Logical block diagram
At a 10,000-foot level, the overall solution provides a managed pool of "ready to use" containers, each running the entire application stack. Each container is created with a fraction of the host server's memory and, optionally, CPU. The solution also provides a background Monitoring component that keeps a check on pool utilization. Pool parameters such as capacity and the minimum number of available containers can be configured by administrators, and the Monitor cleans up or replenishes capacity based on these configurations using the Container Controller component. The Container Controller uses a container-specific SDK and an infrastructure virtualization SDK to automatically and programmatically spawn the required infrastructure, i.e., virtual machines, as well as the containers, as needed. The updated pool status is constantly published to a service registry. A Broker, the external-facing component of the solution, relies on this status to earmark and allocate a container instance to a diagnostic session. All these components use a lightweight messaging system to interact with each other in a decoupled fashion.
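A minimal sketch of the Monitor's replenishment logic is shown below, assuming pool membership and state are tracked through Docker labels and that the pool parameters are simple constants. The actual solution also involves the infrastructure virtualization SDK, the service registry and the messaging layer, which are omitted here.

```python
import docker

# Configurable pool parameters (illustrative values, set by administrators).
POOL_CAPACITY = 20   # maximum containers in the pool
MIN_AVAILABLE = 5    # minimum "ready to use" containers to keep on hand

client = docker.from_env()

def pool_containers(state=None):
    """List pool containers, optionally filtered by state (label names are assumed)."""
    label = ["kdcp.pool=true"]
    if state:
        label.append(f"kdcp.state={state}")
    return client.containers.list(filters={"label": label})

def replenish_pool():
    """Top up the pool so MIN_AVAILABLE containers stay ready, up to POOL_CAPACITY."""
    shortfall = MIN_AVAILABLE - len(pool_containers(state="available"))
    for _ in range(max(0, shortfall)):
        if len(pool_containers()) >= POOL_CAPACITY:
            break                            # capacity reached; new VMs would be needed
        client.containers.run(
            "kdcp-stack:latest",             # hypothetical image name
            detach=True,
            mem_limit="512m",                # fraction of host memory per container
            labels={"kdcp.pool": "true", "kdcp.state": "available"},
        )
```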
User Experience
The end user runs a utility that requests a new diagnostic session from the Broker via an HTTP request over the internet. The Broker then allocates an available container to this request. The client utility establishes connectivity with the allocated container and performs the business function. When the user terminates the session, the container marks itself for recycling. The Monitor service then extracts the vital information from the session, cleanses the container, and marks it as available in the pool again.
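The sketch below outlines how such a Broker endpoint might look, here as a minimal Flask service that hands out an available container from the pool. The route, labels, port and response shape are assumptions for illustration only, not the KDCP Broker's actual interface.

```python
import docker
from flask import Flask, jsonify

app = Flask(__name__)
client = docker.from_env()

@app.route("/sessions", methods=["POST"])
def allocate_session():
    """Allocate one 'ready to use' container from the pool to a new diagnostic session."""
    available = client.containers.list(filters={"label": "kdcp.state=available"})
    if not available:
        return jsonify({"error": "no capacity available"}), 503

    container = available[0]
    # Docker labels are immutable after creation, so a production Broker would
    # record the allocation in the service registry rather than relabelling here.
    return jsonify({
        "session_container": container.name,
        "endpoint": f"http://{container.name}:8080",   # assumed port / addressing scheme
    }), 201

if __name__ == "__main__":
    app.run(port=5000)
```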
Figure 2: Activity diagram
Merits
This solution proposes the use of containers. Container-based virtualization is gaining immense popularity because of its obvious benefits: optimized infrastructure usage, easy orchestration and automation, and fast container startup times.
Specifically, consider a virtual-machine-based environment on a public or private cloud. For a global use case like this, thousands of virtual machines would need to be spawned and run concurrently, each for a short amount of time. Launching a Linux virtual machine takes about a minute, plus another minute or so before it is ready to serve a client request, which is a long time for an internet-based service. Additionally, each virtual machine's spare capacity goes unused, which, multiplied by the number of machines, is again a big concern. Compare this with container-based virtualization, where multiple containers, each with a fraction of the resources, can be launched inside a single virtual machine and be ready to serve in a matter of a few seconds.
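To make the infrastructure saving concrete, the short calculation below compares one VM per session against packing containers onto shared VMs. The figures (16 GB VMs, 512 MB per stack, 1,000 concurrent sessions) are illustrative assumptions, not measured KDCP numbers.

```python
# Illustrative capacity comparison (all figures are assumptions).
vm_memory_gb = 16        # memory of one cloud VM
stack_memory_gb = 0.5    # memory actually needed by one application stack
sessions = 1000          # concurrent diagnostic sessions to serve

# One VM per session: each VM carries ~15.5 GB of unused headroom.
vms_one_per_session = sessions
idle_gb = vms_one_per_session * (vm_memory_gb - stack_memory_gb)

# Containers packed onto shared VMs: ~32 sessions per VM.
sessions_per_vm = int(vm_memory_gb // stack_memory_gb)
vms_with_containers = -(-sessions // sessions_per_vm)   # ceiling division

print(f"One VM per session : {vms_one_per_session} VMs, ~{idle_gb:.0f} GB idle")
print(f"Containers per VM  : {vms_with_containers} VMs")
```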
Additionally, this solution is elastically scalable (i.e., it scales as needed), ensuring savings on infrastructure costs and uninterrupted service.
Figure 3: Messaging for loose coupling
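As a concrete illustration of the loose coupling shown in Figure 3, the sketch below publishes a pool-status update to a RabbitMQ queue using the pika client. The queue name and message shape are assumptions, and any lightweight broker would serve equally well.

```python
import json
import pika

# Connect to a local RabbitMQ broker (host and queue name are assumed values).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="kdcp.pool.status", durable=True)

# The Monitor publishes pool status; the Broker consumes it without either
# component knowing about the other directly.
status = {"available": 7, "in_use": 12, "capacity": 20}
channel.basic_publish(
    exchange="",
    routing_key="kdcp.pool.status",
    body=json.dumps(status),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
```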
The author thanks Mr. Shirish Patwardhan, CTO and Founder, KPIT Technologies Ltd., and Mr. Anup Sable, CTO, KPIT Technologies Ltd., and others in the CTO organization for providing the opportunity to address this problem and for their ongoing reviews and guidance.
The author also thanks Mr. Amod Mulay, Competence Center Head – Enterprise and Cloud Platform for Integration and Analytics in Automotive and Industrial Applications, for his valuable inputs in formalizing the architecture and the white paper.
The author sincerely thanks the members of the KDCP team at KPIT Technologies Ltd. for providing application support, technology and architecture inputs, and business domain inputs, as well as access to the product literature and documentation.
The author, Abhay Chaware, acquired his Bachelor's degree in Mechanical Engineering in 1999 from the Maharashtra Institute of Technology, Pune, India.
He started his professional career in 1999 with Godrej Infotech Ltd, Mumbai, India, and later joined KPIT Technologies Ltd, Pune, India, in 2001. In his current role as an Architect in the CTO office, he helps and guides product teams, business units and practices to architect scalable solutions on premise and in the cloud. He specializes in web-based and cloud technologies.
Headquartered in Pune (India), KPIT is a global technology company that specializes in providing IT Consulting and Product Engineering solutions and services to key focus industries.
KPIT's broad diagnostics and connectivity portfolio helps manufacturers, service providers and technicians remotely monitor their devices and machines, find faults faster, and diagnose errors more accurately, effectively enhancing the performance of the products and the user's overall experience. KPIT's portfolio of platforms, tools, and applications is adopted by vehicle and device manufacturers to remotely monitor their products, develop processes and carry out diagnostics to reduce downtime and enable predictive maintenance, thereby improving profitability.
Abhay Chaware, Architect, CTO Office, KPIT Technologies Ltd.