and the Future of Software Development
Does the evolution of business software development principles parallel that of manufacturing, and can manufacturing's history predict software's future? Interestingly, we can find striking similarities between the history of manufacturing since the Industrial Revolution and the past 20 years of software development.
The evolution of customer needs and the improvement of production techniques have driven the progress of manufacturing principles.
At the beginning of the 20th century, Fordism brought a radical change: thanks to the invention of the assembly line, the principle of standardization made it possible to produce thousands of identical units of the Ford Model T. The cost cutting this new technique delivered democratized access to goods, but with a serious limitation: only a single model was available. Ford's famous sentence underlined it: “Any customer can have a car painted any colour that he wants so long as it is black”.
In software development, the first approach adopted was completely monolithic, mirroring the principle of Fordism. As Gunnar Menzel defines it in his article Microservices in cloud-based infrastructure, this one-size-fits-all model consists of a “software design pattern that includes all functional and non-functional features into one box”.
What happened? As with cars, customers started to say the particular word we hear every day in software development: BUT. “I like it, but…”, “It does almost what I want, but…”.
The “buts” of the early 1920s led to the next step forward in car manufacturing: General Motors started to offer a variety of colours and models.
After World War II, Toyota introduced a new manufacturing game-changer. Toyotism and the Lean manufacturing model put in place a series of principles centred on the idea of continuous improvement. Some of the concepts of this manufacturing turnaround were adopted much later in software development and described by Eric Ries in his book The Lean Startup.
Toyota’s inspirational manufacturing model, years later, led software companies to adopt agile, iterative and highly responsive methodologies to better answer their customers’ requirements and all their “buts”.
The monolithic model was no longer acceptable, as the architecture of the solutions needed a finer grain. Service-Oriented Architecture (SOA) started to replace the previous approach by “exposing discrete components of an application as web services” (G. Menzel). The multi-layered approach of SOA “slices up” the solution into multiple layers of services and therefore provides more flexibility.
Smart Factory and Mass Customization
We are now entering a new manufacturing era: the Smart Manufacturing, or Industry 4.0, age. Customers’ demands have changed, and manufacturing companies now face a new challenge: how to provide mass-customized products to their clients. The same question can be asked of software development: how does one provide highly customized pieces of software that can also be supported, upgraded and evolved through their whole lifecycle?
The answer resides in increasing the granularity of the solutions provided. The microservices approach is the best answer to this question yet. As Menzel defines them, microservices are “independent application services delivering one single business capability in an independent, loosely connected and self-contained fashion”. In other words, every piece of the puzzle does one thing, does it well and is highly independent of the rest.
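Menzel's definition can be illustrated with a minimal sketch: a hypothetical part-lookup service that delivers exactly one business capability (resolving a part number to its description) over HTTP, and nothing else. The service, part numbers and endpoint are invented for illustration; Python's standard library is used to keep the sketch self-contained.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical catalogue for the sketch; a real microservice would own its datastore.
PARTS = {"P-100": "Hex bolt M6", "P-200": "Washer 12mm"}

class PartHandler(BaseHTTPRequestHandler):
    """Serves GET /parts/<number> — the service's entire surface."""

    def do_GET(self):
        number = self.path.rstrip("/").split("/")[-1]
        if number in PARTS:
            body = json.dumps({"number": number,
                               "description": PARTS[number]}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "unknown part"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Run the service on an ephemeral port and call it once, as a client would.
server = HTTPServer(("127.0.0.1", 0), PartHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
with urllib.request.urlopen(
        f"http://127.0.0.1:{server.server_port}/parts/P-100") as resp:
    result = json.loads(resp.read())
server.shutdown()
```

Because the service is loosely connected and self-contained, it can be replaced, scaled or retired without touching the rest of the puzzle.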
Getting back to the initial question: is the evolution of manufacturing a good indicator for predicting the future of software development? The principles of both sectors advance in parallel because customer demand evolves in the same direction, from mass standardization to mass customization. Mass customization of manufactured goods is only possible in a Smart Factory.
Mass customization of software is only possible thanks to one special, high-performing platform: a Smart DevOps Software Factory. We will follow up on this concept in a future article.
PLM (Product Lifecycle Management) is a methodology used in manufacturing whose central concept is the lifecycle. Life is a cycle that starts with birth, continues with growth and ends in death. For human beings, pregnancy lasts 9 months and birth a few hours (hopefully…). Life expectancy at birth, on the other hand, is about 100 times longer: the latest estimates in developed countries are around 80 years! Carrying and delivering a child is a beautiful and painful moment of life, but it is only a moment: it is just the beginning of the story, just as the first software delivery is just the start.
Our customers wish to have beautiful and healthy pieces of software that adjust exactly to their Form, Fit and Function but, more importantly, they need them to grow and mature with them. Our customers live and breathe PLM every day as a central concept of their manufacturing business. We have learned from this concept and applied it to software development by creating a unique DevOps platform that allows us to be not only the prolific parents of hundreds of apps but also the caring parents who accompany them all along their life journey.
What is the benefit?
Our customers make tractors, jewelry, spaceships, furniture… that is their job, not producing and maintaining software. The Wincom DevOps platform and the apps methodology allow them to keep their Total Cost of Ownership (TCO) significantly lower, predictable and managed.
An important secondary effect is pointed out very interestingly by John Papageorge in his post “What’s the real Total Cost of Ownership of on-premise PLM software?”: the economic concept of “opportunity cost”. If you invest your money in in-house customized software solutions, you will not only get a poor ROI but also tie up your resources on a task that is not your core business. The money invested in support software, had it been invested in innovation in your core business, might have made your company even more competitive in its industry and therefore increased your revenue: somehow, you missed the opportunity.
A highly valuable lesson we learned from our manufacturing customers is that birth is the first step of a long life(cycle) journey: this has allowed us to be the parents of almost 300 software babies and, more importantly, to support them as they grow up.
This post is the first of a series on how to apply manufacturing concepts to software developments.
I don’t know if you are much into fashion, but last year women’s jumpsuits were quite trendy and, like pretty much every girl I know, I got one. Last week I wanted to wear it to a party, but due to too much good food over the holidays and a January lack of motivation to go to the gym, it was a no-go. I bitterly thought: I should have bought a pair of trousers, a blouse and a jacket, as some of it would probably still fit!
As you are not reading Vogue but a blog about PLM, you are now thinking: what on earth is the connection with software? As a software company, we are often given a list of features that a customer needs, and they expect us to build a single solution to satisfy these requirements.
The way this is normally done is that a project is started and, after several months of hard work, brainstorming, meetings, development, more meetings, testing, feedback, more meetings etc., you finally get a piece of software that more or less fits the original need.
However, the business, the requirements, the teams and the related software systems never stop evolving, and almost from day one the delivered solution needs to evolve, be enhanced, be extended, and even have some features discarded just to keep up; not to mention the inevitable upgrade of the PLM.
That usually means more brainstorming, more meetings, testing, feedback. Very soon the total cost of ownership (TCO) starts to spiral out of control.
A fashion tip: Avoid jumpsuits and invest in more flexible solutions that can evolve with your shape and form… or your clothes will stay in the closet.
What if, instead of purchasing this jumpsuit piece of software, your requirement were divided into a series of apps that complement one another and, all together, provide you the perfect outfit? This solution has a lot of benefits:
– The implementation and deployment of apps are much easier and therefore less costly
– When you need to extend or enhance one aspect of your initial requirement, this can be done on just one app, adding or changing that specific feature. This “surgical” approach also means significant time and cost savings.
– If you no longer need part of the features you planned in the first place, the corresponding app can simply be uninstalled without touching any of the others.
– The result is a much better managed TCO and a big decrease in the associated ownership headaches
Apps are the trend in software, but successfully delivering an app-based solution means we must use a DevOps platform built on PLM ideas to make this concept a reality and get away from the monolithic spaghetti that most systems turn into.
In your company you probably hear this type of question: “Where are my changes?”, “What is taking so long?” or “What is holding up the process?”… and many people saying, basically, “I don’t want to understand Windchill, but I need the information it holds”. That’s why we created the External App Platform, where your management, your services, your external partners or whoever you decide can actually see all this information without having to dig into the Windchill complexity to find it. As every customer has unique needs and requirements, we brand and configure the platform exactly to your needs. We can configure as many platforms as you require: your executive management will not need access to the same information as your external partners!
This is why the External App Platform (EAP) is a Service-as-a-Product: You get exactly what you need, but it is supported and maintained as a product would be.
The EAP has different modules that you can plug in. The first one is Data View: it allows you to visualize the information you need about changes, states, etc. in graphs and tabs, in a very user-friendly interface. Every single column, row and type of graph is entirely configurable, and the best part is that you can export all this information in different formats (.csv, .xls, .pdf) and, with our report engine, even produce a hard-copy report.
Through our second module, Search, you can navigate easily through your structures.
Portal (our third module) allows you to have instant access to CAD and Part data, and to 3D visualization. More modules are in progress, such as a new Workflow Visualization module.
Finally, we can build modules to order, to match exactly what you might need.
Our EAP is currently in production in many large, international manufacturing companies from fields as diverse as Agroindustry, Defense and Furnishing.
Interested? Request an online demo or an evaluation copy here! And please take a look at our App Center.
After 15 years working in PLM, we have seen that many PLM implementations are delivered late, and only after lots of pain for everyone involved; some projects fail to deliver anything at all. So why is PLM so hard to implement? Why are the projects almost always late? And how can we deliver a successful PLM on time?
Why projects are late and/or fail
Here are some common reasons why a PLM project is late or fails:
- Resources: getting the right ones with the right knowledge
- Unrealistic expectations of management and users
- Under-estimating the complexity of PLM
- It’s the fault of the software: we should have bought PTC, Dassault, Siemens etc.
- Delays of vendors to fix “critical” issues
- Technical problems that delay migration of data
There are many more, but in essence it comes down to project management and resources, and in PLM the way we manage these things doesn’t seem to have changed in decades.
Let’s start at the beginning, scoping the project: how long will it take and how much will it cost?
Often, implementers are obsessed with counting man-days. Most PLM projects are scoped in the same way: the implementer spends days or weeks producing a huge Excel spreadsheet with a large number of line items specifying each activity and the number of days each will take. They add them up and triumphantly present a huge total of XXX days. The customer is shocked by the huge number, and the next few weeks are spent trying to find “savings”. It is an archaic way of defining a project that assumes we know everything from the beginning. But how can this be right? How can we know what we don’t know before we even begin?
Over time, lots of project management ideas have come and gone to address this failure to deliver PLM, such as platforming, critical chain, etc., which certainly improve things, but I propose we run our PLM projects as if the PLM team were a “startup”.
“The Lean Startup” is one in a series of recent books that try to help entrepreneurs create and manage successful tech startups. The book has many ideas, but a key one is to get something to market as soon as possible so you can learn what you don’t know. The idea is not new, e.g. Agile software development methodology also promotes it, and ironically these methods basically reuse many ideas from Lean Manufacturing that the manufacturing industry itself invented. In “The Lean Startup”, this idea takes the form of a concept called the MVP, the Minimum Viable Product.
I propose we need to think about our PLM projects as the Minimum Viable PLM to make our projects a success.
What is a Minimum Viable PLM?
A Minimum Viable PLM is not a prototype, it is not a “bad” implementation, it is a production worthy system, but we have cut down the requirements to the core. Once it is in production, we rapidly move to improve the solution, fix issues and add features that make our PLM the fully featured system our users require.
So we need to learn from our users and fix our mistakes, but we also need to be brave and make decisions (even the wrong ones) to avoid analysis paralysis. It does require a responsive, dynamic team to make it work.
So the question is “Could it work?”. Our experience tells us it can, as the following case study illustrates.
MVP Case study
A medium-size French company that makes postal machines implemented a successful PLM in one year. How?
- A project manager who ruthlessly defined his MVP
- An infrastructure that meant they could change their decisions quickly and learn from mistakes
- A small, highly focused team
- A mechanism to release new software updates almost daily
- Really good software people, a skilled implementation team (Accenture) and a fully engaged customer
At go live, our Minimum Viable PLM looked like the image below and was successfully used in production.
But that was not the end; two years after go live, it now looks like this:
Over time there have been issues, but these have been rapidly resolved. More features are constantly being added, and some will take more time, such as a key goal to radically change the way products are designed using ETO (Engineer-to-Order); some things in PLM are just plain difficult.
MVP really works: we have seen it work, and we know that even in traditionally scoped projects it rapidly takes hold, simply by providing the tools to iterate quickly. However, we must have the project management and technical expertise to support this concept, and the bravery to make decisions that may be wrong.
Through our work in extending and customizing PLM we are constantly in conversation with customers and Value Added Resellers across the world about what companies are doing today, planning tomorrow and the direction that they are trying to take their PLM systems.
In this article I will try to summarize the key day-to-day problems that concern our customers, namely efficiency, and the longer-term drivers that are pushing them, namely innovation.
One key driver in this area is user interface efficiency; with the advent of sophisticated mass-market websites like Facebook, LinkedIn and Salesforce, users expect and demand software that is easy to use. Many requests we get are simply to reduce the number of clicks required to perform an action. PLM vendors have responded to this and core product interfaces are improving, but some tasks are still mysteriously complicated.
Customers all do things slightly differently, so specific user interfaces are in demand, often to improve the interaction for users with frequent, repeated tasks: e.g., creating data, mass actions such as printing, or applying business validation rules in the user interface to avoid complex and time-consuming workflow processes.
Efficiency is also required for inter-system communication: almost all companies have the classic PLM/ERP interface requirement but many wish to connect other systems as well. The Windchill Problem Report object can be used to start the change process, but users will no longer accept the need to switch systems and double-enter data.
They want their issue tracking system to be integrated and “send” the request to the PLM. There is a frequent requirement to integrate software within the product development process and we have implemented a number of targeted solutions to address this.
This integration is also a key selling point for vendors, as shown by PTC’s acquisition of ThingWorx and Integrity. Technical publishing and distribution of CAD data is another active area where efficiency is a key point. For products to be shipped quickly, manuals need to be updated, translated and delivered with accurate information.
The information needed is in the PLM, so closer integration and improved connectivity with the document authoring software is important. For example, we recently streamlined the translation process for one company’s manuals, as this was seen as a key cost and delay in the product release process.
Another frequent requirement is to access product data in an easy way without entering the PLM, for example through custom portals. These are used by ERP or service users to gain access to drawings and 3D models, and often need to be mobile-compatible. As data moves outside the PLM realm, protecting IP has become a greater focus, with attention being paid to solutions such as watermarking.
If we look at the underlying driver of all of these requests, it is that organizations are under pressure to collaborate better and to access and use product data efficiently. PLM is becoming a truly enterprise tool and less of a CAD-oriented workgroup manager.
The PLM managers we speak with are swamped with new requirements, increasingly coming from outside the engineering department.
Being first to get a product to market is clearly vital, but just as important is creating the “right” product. Whilst efficiency can be seen as a tactical improvement within the organization, to create products faster and with improved quality, innovation is a strategic goal: to make better products. This is the driving force behind another set of requirements that we get.
The companies we work with often see their competitive advantage coming from being more reactive and nimble than their competitors. Their ultimate goal is to respond to their clients’ needs and provide them the perfect product as quickly as possible.
This is the holy grail of PLM and is known as mass customization; other related terms are something-to-order (Design, Engineer and Assemble), options and variants, and product configurators. Whatever the terminology used, the objective is the same: to create products that are tailored to the customer.
The problem is that mass customization is not easy; for many years PLM vendors have tried to put solutions into the marketplace and generally failed. The reason is that this problem touches every part of the product design process, from the way the CAD engineer designs to the way the product is serviced.
Most organizations know this is a puzzle that must be tackled piece by piece. The first piece that has a lot of attention at present is the Bill of Materials transformation process, for example eBoM to mBoM. For a surprisingly large number of clients, this is a painstakingly manual process, and therefore there is a lot of interest in products such as PTC’s MPMLink.
We have a number of clients that have or are in the process of implementing it; the current version is somewhat “quirky” although the underlying principles of the product are considered impressive. A new version is due soon and is eagerly awaited. Other customers using Enovia are also talking about similar BoM transformation issues.
So in essence the long term goal is mass customization, and to achieve this, companies are focusing on better product definition and data transformation tools.
Product data is seen as more and more valuable, and therefore the role of PLM in the organization is on the rise. PLM is being put under pressure to deliver a greater diversity of products, faster and with better quality. This requires much better automation of the product development process from cradle to grave. We still have a lot of work to do!
Just in case you have not noticed, the world has gone mobile crazy. Mobile apps are now key business tools, and Engineering is no exception. This raises an important problem: mobile, by definition, means that your data is going mobile; but how do we protect the Intellectual Property (IP) in this data? PLM holds data in the form of CAD models, drawings and documentation, some of the most critical data the company owns. Protecting IP is not new: in 1967, Israel’s Mossad allegedly stole 3 tonnes of drawings of the Dassault Mirage fighter and effectively copied the aircraft. It was a huge operation; now someone can carry 3 tonnes of drawings on their phone! Going mobile clearly poses a new set of problems.
Financially, IP theft is not a small matter: the U.S. Commerce Department has estimated that intellectual property theft costs the economy more than $250 billion and 750,000 jobs annually, and the International Chamber of Commerce estimates that the global fiscal loss is more than $600 billion per year. As another example, the worldwide turnover of fake automotive parts and components is estimated at $12 billion a year, of which $3 billion is in the USA alone.
Protecting mobile data
Before we can protect our valuable data, we need to understand how it is used. Who needs access? What do they want to do with it? And where do they need the data?
The following are some examples of data going mobile:
- Managers making approvals whilst travelling
- Shop floor access to drawings and CAD models to check details
- Service technicians on-site repairing products
- Providing designs to OEMs and third parties to outsource manufacturing
- Design reviews with clients on-site
For our organisations to thrive the product data must be mobile, but how can we protect this data in the wild?
First, we need our design data to be housed in a safe place, and most companies have agreed that PLM is that place. All PLMs have access control mechanisms, usually a complex matrix of users, teams, folders and lifecycles; using rules, we restrict users’ access to data. The first stage in protecting data is to make sure it is carefully organised within the PLM. For example, in Windchill, “containers” (aka “contexts”) were introduced some years ago to assist with this, and now form the backbone of the data and team organisation in the system and therefore of the underlying access rules. However, mobile adds a new dimension to access control: “If I want to see a drawing on a tablet, it is because I want to move it somewhere”, and this requires more than static data management rules.
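The static rules described above boil down to a matrix lookup. The sketch below is purely illustrative: the roles, containers and actions are invented, and real PLM access control (Windchill’s included) is far richer, but the principle of rules mapping (who, where) to permitted actions is the same.

```python
# Illustrative access matrix: (role, container) -> set of permitted actions.
# Roles, containers and actions are invented for this sketch.
ACCESS_RULES = {
    ("engineer", "ProductA"): {"read", "modify"},
    ("supplier", "ProductA"): {"read"},
    ("viewer",   "Library"):  {"read"},
}

def is_allowed(role: str, container: str, action: str) -> bool:
    """Return True if this role may perform the action in this container.
    Anything not explicitly granted is denied (default-deny)."""
    return action in ACCESS_RULES.get((role, container), set())
```

The default-deny lookup is the key design choice: absence of a rule means no access, which is why careful data organisation inside the PLM matters so much.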
In the Clouds
“Let’s keep it in the cloud, it will be safe!” This seems like a legitimate answer, but it is not that simple. Unfortunately, even if the data is stored in the cloud, the adage “If I can see it, I can copy it” applies. The data itself is fluid; it moves easily. For example, if I look at a drawing on a mobile device, a copy is downloaded; even if I try to prevent the user accessing the downloaded copy, I can’t stop them taking a screenshot or even simply taking a photo with a high-resolution camera.
The data is on the move, wherever it originated.
With so much at stake, the industry has concentrated on locks: preventing unauthorized access to data that is mobile. A number of commercial solutions exist to encrypt and password-protect files when they are viewed or downloaded; the viewer attempts to limit the user’s ability to make changes, cut and paste, save, etc. All the major CAD vendors have this type of solution. This software uses closed applications and proprietary file formats to limit access; perhaps the best-known widely used closed application is Adobe’s Acrobat PDF viewer. Many companies use PDF to provide read-only access to drawings and documents, but there are many other applications specifically for CAD data. It should be noted that however hard we try, we can never get past the “If I can see it, I can copy it” rule. There are many other techniques used to try to protect data.
Watermarking is a very active area of interest for many companies: a mark is added to a drawing which overlays additional information and in doing so makes the image harder to copy. Taking another approach, some software providers have investigated Digital Rights Management (DRM), or more accurately IRM (Information Rights Management), but most seem to have rejected it as too complex to administer.
Finally, we need to consider the human factor: an Ibas survey (www.ibas.net) shows that only 28.2 percent of business professionals think that intellectual property theft is completely unacceptable, and the most common thieves are the IT folks themselves. So maybe locks on data are not the only answer; at Wincom we are looking at other ideas.
The first is to make it hard work to copy the data. Many 2D drawing formats are vector based, meaning that the file is effectively a set of instructions on how to draw the drawing. This makes the result small, fast and scalable; examples of this format are DWG and SVG. The problem is that this format is very easy to copy, and even the watermarks are relatively easy to remove. Converting vector to raster when sending content to mobile devices makes it much harder to copy, albeit at the cost of larger file sizes.
Another technique we have adopted is called “personalised watermarking”. Wincom watermarks are applied at the moment a user views or downloads a drawing, and include the name of the user and the time and date. This encourages users to value and look after the data properly.
In addition to standard watermarks, we also incorporate “hidden watermarks”. Once the data is in a raster format, we can embed information in the drawing that the human eye cannot see. This means that if the data is copied in any way and we later get hold of the copy, we can identify who copied the data, and how and when. Having a leak is bad enough; not stopping it once you find it is worse.
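The principle behind a hidden watermark can be sketched as least-significant-bit embedding: once the drawing is raster data, a user ID and date can be written into the lowest bit of each pixel byte, invisible to the eye but recoverable later. This is only an illustration of the general idea, not Wincom’s actual implementation; the pixel buffer and message are invented for the sketch.

```python
def embed(pixels: bytearray, message: bytes) -> bytearray:
    """Hide the message in the least-significant bit of each pixel byte.
    Each pixel byte changes by at most 1, so the image looks identical."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    """Recover `length` bytes previously hidden by embed()."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

# Stand-in for raw raster pixel data; a real drawing would be much larger.
image = bytearray(range(256)) * 4
stamped = embed(image, b"jdoe 2024-01-15")
```

If a stamped copy later surfaces, extracting the hidden bytes identifies the user and date of the original view or download.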
For our clients we use a mobile PLM framework and a secure content server, which allows us to create custom task oriented apps, giving the user quick access to only the data they need to do their job and provides that data only in a secure format.
Accept that the data cannot be 100% secure if it is mobile, but we can make unauthorised copying hard using raster formats, watermarking and closed applications. The next step is to get employees to take ownership of and value the data they use. Keep data well organized and access control rules up to date. Finally, give users access to only the data they need with task-oriented mobile apps, which will make them more productive and reduce the risk of IP theft.
There is a relationship between PLM and ERP and therefore a need to integrate the two systems; at Wincom we create 4 or 5 of these integrations every year. I’d like to share our experiences with you.
ERP is all about efficiency: using the resources of the company to their maximum potential. PLM is all about managing creativity: ensuring that the innovation of a company is nurtured, but kept under control. Linking these systems together is a bit like a marriage, where each partner has their own personality and slightly different goals, and both need to work together to raise successful products. PLM systems have a tendency to over-communicate, whereas ERP systems often think they don’t need to listen to the rather bothersome PLM. In fact, like any successful relationship, it is vital that there is good communication between them.
Where to start…
It seems clear that we need to connect the systems, and to do so we need to answer some basic questions about the interaction:
1. How to send the data
2. What to send (and receive)
3. When to send the data
Before answering these questions, we must realize that the relationship between the two systems will take time to establish and will evolve. We need to begin with basic information, transferred as it is needed. Almost always, we need to send product data from PLM to ERP, but later, as the relationship matures, we need to send data back from ERP to PLM. Parts and BoMs, and information about availability, suppliers and costs, can be vital information sent back to PLM to help the product designers.
How to send the data
The first problem is that the IT and ERP teams often expect to handle large amounts of transactional data; this drives their technology decisions, which then tend towards the most mature data transfer technologies available (e.g., Tibco), primarily developed for banking applications. We have often seen heavy middleware solutions, designed for high-volume secure banking transactions, implemented at great expense. But is this what we really need? How often does the product get released or changed? Do I need to transfer this data every 10 milliseconds? Clearly the answer is no.
Recently the technology landscape has changed, and now the ubiquitous web service is often put forward as a solution, but even this involves a degree of complexity. The place most companies actually start from is the basic idea of a single file exchange. Frankly, why not? This basic approach helps to create a common vocabulary and put in place working practices that can evolve into “proper” interfaces. XML is a good way to format the data, but at Wincom we are often asked to use the humble comma-separated file (CSV). (We even had one company that told us to create XML, which they then quietly transformed into CSV, as that was the only thing their ERP developers actually understood.)
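The humble file exchange can be sketched as follows: a released BoM exported from the PLM as XML, flattened into the CSV the ERP side consumes. The element names and columns here are invented for illustration, not a standard schema; only Python’s standard library is used.

```python
import csv
import io
import xml.etree.ElementTree as ET

# Hypothetical BoM export from the PLM; element and attribute names are
# illustrative, not any vendor's schema.
BOM_XML = """\
<bom part="ASM-001" revision="B">
  <line find="10" part="P-100" qty="4"/>
  <line find="20" part="P-200" qty="4"/>
</bom>"""

def bom_xml_to_csv(xml_text: str) -> str:
    """Flatten a single-level BoM XML into one CSV row per BoM line."""
    root = ET.fromstring(xml_text)
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["parent", "revision", "find", "part", "qty"])
    for line in root.findall("line"):
        writer.writerow([root.get("part"), root.get("revision"),
                         line.get("find"), line.get("part"), line.get("qty")])
    return out.getvalue()

csv_text = bom_xml_to_csv(BOM_XML)
```

Trivial as it looks, agreeing on exactly these columns and values is the “common vocabulary” step that later, richer interfaces build on.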
What to send
So we don’t need a high-speed middleware bus, but we do need to send accurate and sensible information. To begin with, the ERP people need to understand a little about the PLM obsession with history (a good PLM is like an elephant: it never forgets), and the PLM team needs to understand the ERP focus on the here and now.
A good understanding of revision schemes, effectivity and change models all become important. It is interesting to note that the same words can mean different things to different people, so nothing should be assumed. Finally, it should also be noted that a working, formal change process is a normal pre-requisite to an interface, as it is only then that the “creative” product data is under a sufficient level of control to be of practical use to the ERP.
With a common vocabulary in place, a formal declaration of the nature and format of the data to be exchanged can be agreed.
When to send the data
Finally “when” is important; the trigger to send the data usually comes from the change process of the PLM, often as a change reaches a certain level of maturity in its lifecycle. The user interface may also allow authorized users to trigger a transfer of data with a custom menu option. Tools are sometimes needed to trigger bulk transfers of data. Data may be transferred as little as once a day but it needs to be sent automatically and securely, with complete traceability. But, as we said before, we have no need for high volumes of data to be transferred instantaneously.
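As an illustration of the trigger idea, here is a minimal, generic sketch: a listener queues a transfer when a change reaches a given lifecycle state, to be picked up later by the scheduled export. The state names and the `Change` class are assumptions for illustration, not the Windchill API.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch only: a generic listener that queues a transfer when
// a change object reaches a given lifecycle state. The Change class and
// state names are assumptions, not the Windchill API.
public class TransferTrigger {

    static class Change {
        final String id;
        final String state; // e.g. "IN_WORK", "RELEASED"
        Change(String id, String state) { this.id = id; this.state = state; }
    }

    private final String triggerState;
    private final Queue<String> outbox = new ArrayDeque<>();

    TransferTrigger(String triggerState) { this.triggerState = triggerState; }

    // Called whenever a change moves to a new lifecycle state.
    void onStateChange(Change change) {
        if (triggerState.equals(change.state)) {
            outbox.add(change.id); // picked up later by the scheduled export
        }
    }

    int pendingTransfers() { return outbox.size(); }

    public static void main(String[] args) {
        TransferTrigger trigger = new TransferTrigger("RELEASED");
        trigger.onStateChange(new Change("CN-001", "IN_WORK"));
        trigger.onStateChange(new Change("CN-002", "RELEASED"));
        System.out.println(trigger.pendingTransfers()); // prints 1
    }
}
```

Decoupling the trigger from the actual file transfer is what gives the traceability and "once a day is enough" scheduling described above.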
Conclusions and future
Once the two partners have started to talk and begin to exchange information, it is not the end of the story. There are huge benefits to be gained by expanding the way the systems communicate with each other, and also to get the ERP to open up and send data back to the PLM (such as supplier data). If we can have open, clear communication we can start to get real benefits and raise fit, strong and healthy products.
How can the extended enterprise leverage PLM?
These are real questions we have been asked by real customers in the last year.
– Our service engineers on site in Africa need to see the latest drawings.
– The shop floor needs easy access to the latest process plans.
– The purchasing manager wants to see if they should order 10 or 10,000 parts.
– The engineering manager needs to approve a design whilst getting on a plane.
A fully featured PLM such as Windchill or Enovia has an interface that scares most people, even inside the engineering community. We need this power to answer our complex engineering questions, but PLM also holds information that is crucial to the success of the extended enterprise, so how do we make it more accessible?
People use technology differently now; apps on mobile platforms are one clear example. “I don’t need a map unless I am studying geography; I want to get from A to B”. In PLM terms: “I don’t want to learn about effectivity; I want to find a drawing”. Simply put, an app is a highly focused interface that does one thing well, like find a drawing.
Interestingly, this shift is happening everywhere, as pointed out by Oleg Shilovitsky of PLM Think Tank in his article “What Social PLM Can Learn From Facebook Decline?”, where task-oriented apps, such as messaging, are being used in preference to Facebook in some situations. Now, Facebook is not about to go bust, but apps like WhatsApp and Line are definitely on the rise.
PLM is full of very useful information, but it is used primarily by users who live and breathe engineering. A simple app does not compel users to enter the PLM; it gives them a limited view of the PLM, focused on what they need it for. “There’s an app for that.” A classic example would be a mobile or tablet app, each one designed to help with a specific task. However, apps in business should not just be mobile; they may also live on the desktop, but they are always simple to use and highly focused on an activity. If you think this is another fad, think again: you probably already use a task-oriented app, since configurable reports are simply non-interactive “apps”.
Maybe if we can deploy a few strategic, task-focused apps in our organization, we can provide the extended enterprise with a vital source of information without teaching them all of PLM (they don’t want to learn it anyway).
Vendor Mobile apps
The big mistake PLM vendors make here is to try to make a mini-PLM. Mobile is different, and the applications should be task-oriented: is it really important to have a 3D model on my iPad (except because it looks cool)? Custom apps can help here, and the PLM ecosystem is starting to create a number of apps that plug into PLMs like Windchill.
For example, an app was developed for Bell Equipment, a company that makes trucks, big trucks, used across the world. Bell wanted to put a tablet app in the hands of their service engineers: given a truck id, they have instant access to the manuals and drawings. Interestingly, a future update will add an offline mode; as the customer pointed out to us, they have tablets, but there is no high-speed internet in the jungle.
Searching outside the box
Searching for information is another classic app, and configurable apps can show different searches to different users. The PLM administrator configures the searches, each one for a different task, some via mobile and some on the desktop. The shop floor can find process plans, purchasing can check part usage, etc. Nobody has to enter the PLM to find information: simple apps, quick answers.
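The configurable-search idea can be sketched very simply: the administrator maintains a map from role to named searches, and each user sees only their own short list. The roles and search names below are invented for illustration.

```java
import java.util.List;
import java.util.Map;

// Sketch of the "configurable search app" idea: the administrator defines
// named searches per role, and each user sees only their own short list.
// Role and search names are invented for illustration.
public class SearchCatalog {

    private final Map<String, List<String>> searchesByRole = Map.of(
            "shop-floor", List.of("Find process plan by part number"),
            "purchasing", List.of("Part usage", "Open orders by part"));

    // Returns the focused list of searches for a role, or an empty list.
    public List<String> searchesFor(String role) {
        return searchesByRole.getOrDefault(role, List.of());
    }

    public static void main(String[] args) {
        SearchCatalog catalog = new SearchCatalog();
        System.out.println(catalog.searchesFor("purchasing"));
    }
}
```

Each entry is, in effect, one task-oriented "app": a name the user understands, backed by a query the administrator maintains.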
Watch this space…
The world is not going to stop using powerful PLMs, and apps will not replace them, but the way we interact with information is changing and engineering data is no different. It is also getting dramatically easier to create apps, and coupled with a PLM like Windchill, with its open architecture, there is a real return on quite a small investment. Technology is moving fast, and if we can “think differently” maybe we can apply it to finally get PLM to make its promised impact on the extended enterprise.
Wincom Translate Update
A new update (1.2) for this component has been released
- New show content XML feature
- Improved export and import
- Improved documentation
- Improved import zip error checking
We have a new component, Translate, which extends Arbortext document management to allow users to create new translation structures and export them to the translation teams.
We have added the out-of-the-box edit filter window to allow users to manage the structures.
Note: for the technical among you, this popup is GWT based, and this solution requires a degree of manipulation, which is why PTC themselves use a JSP popup in similar cases rather than the GWT solution. It shows the complication of the mix of technologies used in the 10.x interfaces.
A new Wincom component management console is now available to our clients. It includes the following features:
- Improved logging
- Improved usability
- Improved installation process
Wincom Component Manager
Wincom provides new Windchill extensions to closely integrate Creo with Windchill PDMLink. These extensions can be customized to clients’ requirements, and provide CAD users with highly productive ways to interact with the PLM system, ensuring high levels of CAD user satisfaction and participation in the PLM processes.
- Quick and easy installer for Creo clients
- New custom ribbon in Creo to give instant access to PDMLink
- New features to create parts, documents etc. directly
- Framework allows addition of custom features
- Direct access to simple-to-use interfaces
Creo Create Part
Creo Save As
Please contact us for an evaluation version of this component.
We have begun work on a new component designed to improve the visibility of MPMLink manufacturing data. We know many clients have a critical need to transform and manage the eBoM to mBoM process, and often use MPMLink for this. However, as an old-style applet, it suffers from various usability and performance issues; although most customers agree on the underlying concept, the issue is with the implementation.
We have started to analyze the data model and have added new interfaces to expose this data in the standard PDMLink interface; we are planning other tools to eventually assist in BoM management itself.
eBoM to mBoM view
mBoM to eBoM view
If you are interested in this project, we are looking for interested parties to help us define the scope, and also for beta testers.
This new component is perfect for implementations that do not use the full out-of-the-box change process. Normally the change process is used to revise items; without it, items are revised directly by users without any additional checks. By reusing the standard promote user interface, we have enhanced this feature to not only promote to a new state but also revise a number of objects. This allows the PDMLink administrator to designate approvers per product to approve these revision requests. Check out the video and the product pages.
Many clients want to add or remove dispositions; these are data values held on the links between the tasks and the affected items. The out-of-the-box values do not suit all circumstances, and so the customization guide helpfully advises on how to change them, but it only says that some Java skills are required. This is not true: the change is complex and requires modification of four separate interfaces, some of which are not easy to do and require detailed customization knowledge. The guide is not complete and does not mention the change notice interface and some other mechanisms the user has to both view and update these values. We created a new component to allow clients to make all the necessary changes with a simple, easy-to-install component, without any detailed technical knowledge. It can be configured to match your exact needs with soft types and some basic changes to specify the new columns required.
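As a minimal sketch of the configuration-driven idea behind such a component: the disposition values come from a single configurable string rather than code edits to four separate interfaces. The property format and the values shown are illustrative assumptions, not the component's actual configuration.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Sketch only: disposition values read from one configurable property
// string (e.g. a .properties file), instead of code changes. The property
// format and values are illustrative assumptions.
public class DispositionConfig {

    // Split a comma-separated property into clean disposition labels,
    // trimming whitespace and dropping empty entries.
    static List<String> parse(String property) {
        return Arrays.stream(property.split(","))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        String property = "Use As Is, Rework, Scrap, Return To Vendor";
        System.out.println(parse(property));
    }
}
```

The attraction of this style is that an administrator can change the list in one place and every interface that displays dispositions picks it up.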
More information is available on the component details page.
We recently completed a feasibility study on how to upgrade a PDMLink system with a major customization. Here are some extracts from that report. It is clear that over time many systems, even with the best intentions, degrade and become less an asset and more a liability. When we start to customize our systems, we do not think of what they might become in 5 or 10 years; at Wincom, “cost of ownership” is a constant and ever-pressing concern.
So the question is: how do we recover a system, especially to allow us to upgrade the customizations to later versions of PDMLink?
Here are some key concepts that came out of our study of the upgrade from 9.1 to 10.x.
Key Strategy Points
- Limit risk; this will not be a rewrite of the code of the system
- Take advantage of the new R10 features
- Make some limited tactical improvement to the system
- Map an upgrade plan with a clear timeline and effort
One clear objective of the client is to limit the risk of the upgrade. This is due to the complexity of the system and the difficulty of re-testing, so the fewer changes to the system the better.
The final strategy was an as-is upgrade of the code, with the application restructured using the Wincom Component Architecture.
As-Is Code Upgrade
As we have seen, the code in the customizations is complex and highly interconnected; we could refactor the code to reduce this complexity, but this would introduce the risk of creating errors within the system. A major refactor would also prevent the merging of changes that will happen in the core code during the upgrade, forcing us to introduce an early and extended code freeze.
We have concluded that the code upgrade must follow the rules above; however, this does not exclude the possibility of componentizing the code, and this is exactly what is proposed. From an analysis of the code, it has been seen that components can be identified. A component is defined as a separated set of features and functions that can be upgraded, tested and installed as a single entity. There are clearly a number of these in the system, and code review shows they are possible to identify; however, there must be some clear stages to this componentization strategy.
We define three types of component that will exist in the final system:
||A component that has a real meaning to the users and will include a user interface
||A component involved in the communication between systems either import, export or both
||A supporting component primarily used to hold shared resources between other components
This componentization brings clear benefits:
- We can upgrade in stages
- We can work on more than on part of the customization at once
- We can manage the project risk
- We can estimate the project better
- We can be sure of our progress
- We can retire components
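As an illustration of how this component split enables staged, parallel upgrades, here is a minimal sketch in plain Java: components declare their type and dependencies, and a simple ordering puts supporting components before the interface and application components that rely on them. The component names, types and ordering logic are illustrative assumptions, not the actual Wincom architecture.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: components declare a type and their dependencies, and a simple
// ordering yields one possible staged upgrade sequence. Names and types
// are illustrative assumptions, not the actual Wincom architecture.
public class ComponentPlan {
    enum Type { APPLICATION, INTERFACE, SUPPORT }

    static class Component {
        final String name; final Type type; final List<String> dependsOn;
        Component(String name, Type type, List<String> dependsOn) {
            this.name = name; this.type = type; this.dependsOn = dependsOn;
        }
    }

    // Order components so each appears after everything it depends on.
    // Assumes there are no dependency cycles.
    static List<String> upgradeOrder(List<Component> components) {
        List<String> order = new ArrayList<>();
        while (order.size() < components.size()) {
            for (Component c : components) {
                if (!order.contains(c.name) && order.containsAll(c.dependsOn)) {
                    order.add(c.name);
                }
            }
        }
        return order;
    }

    public static void main(String[] args) {
        List<Component> system = List.of(
                new Component("utilities", Type.SUPPORT, List.of()),
                new Component("teamcenter-interface", Type.INTERFACE, List.of("utilities")),
                new Component("change-ui", Type.APPLICATION, List.of("utilities")));
        System.out.println(upgradeOrder(system));
    }
}
```

Once dependencies are explicit like this, the benefits above follow naturally: each component can be handed to a developer, upgraded, tested and even retired independently.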
Component Overlay Procedure
The key concepts we have introduced thus far are an as-is strategy for the code and the introduction of a component architecture. The component architecture will be defined gradually and will be driven in part by the existing code structure. However, there is a clear objective to create a base technical architecture upon which to build the application and interface components.
The overlay strategy follows four steps:
- Snapshot the current system
- Create the base configuration component
- Component definition
- Create the application and interface components
But will it work? To answer that question, we took an example application and interface component and applied the proposed strategy.
The first stage of the feasibility study was to create the configuration and utilities components. These are technical support components required to support the majority of other components. The separation of these components was clean and there were no significant issues. One major interface to Teamcenter and another significant UI were componentized and upgraded.
Overall, the feasibility study achieved its goal of identifying and qualifying the risks involved in the upgrade project. It is clear that the majority of the upgrade is technically feasible and there are only two major potential risk areas. However, these do not pose a significant threat to the project and, even in the worst-case scenario, would not add significant extra cost to the final project.
The roadmap for the application upgrade can largely be executed in a non-sequential manner. After each component is identified, it can be given to a developer to upgrade, and the testing and validation may also be carried out independently.
At a high level, we can define two technical teams that are required for the upgrade to be successful; for this roadmap we define these as follows:
||Application team: close to the users and the application, with no visibility of the code. They perform the validation and integration of the components.
||Upgrade team: effectively the developers responsible for creating the components, performing the changes to upgrade the code to R10.x and doing code verification. This team does not need to be on-site.
The testing of each component must be done first during development on a clean data set, and then on a clone of the real client database. We use standard software testing terminology to distinguish these test stages of verification and validation.
||Verification: the upgrade team testing that the code does what they expect. Often we use the phrase “Are we building it right?”
||Validation: the application team testing that the code does what the users expect. Often we use the phrase “Are we building the right thing?”
||Integration: the final validation of the complete set of components working together. Some integration tests may be performed on a subset of components. This will be the responsibility of the application team.
An example of the concept for the validation testing environment
Technical Project Plan
The technical project plan does not specify a timeline; it specifies the sequence of steps that must be performed to upgrade the application. It does not include any steps related to the infrastructure upgrade.
The structure of the plan is shown below; it shows some major steps, such as this feasibility study, which are prerequisites for the rest of the process. However, once the upgrade of the components is started, these can be done in parallel by one or more developers.
The upgrade of a complex custom application can be achieved with an external upgrade team, with minimal interaction with the client during the code upgrade. The feasibility of the upgrade has been proven, and it can be achieved in a predictable timescale with minimal risk. There is a need for an on-site application team that will have responsibility for validation and integration.
We have added this new component to our portfolio. It is very easy to use, and adds a vital ingredient to promotions which improves user satisfaction and reduces workflow complexity. This component makes sure only data which conforms to your business rules is promoted. Check out the new video demo to see exactly how it works, and please send us an email with your validation requirements so we can verify them and arrange an evaluation version.
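As a sketch of the rule-based idea behind such promotion checks (the rules and attribute names below are invented for illustration, not the component's actual configuration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Illustrative sketch of rule-based promotion checks: each named rule is a
// predicate over the candidate's attributes, and promotion is allowed only
// when every rule passes. The rules shown are invented examples.
public class PromoteValidator {

    private final Map<String, Predicate<Map<String, String>>> rules = Map.of(
            "has description", attrs -> !attrs.getOrDefault("description", "").isEmpty(),
            "has owner", attrs -> attrs.containsKey("owner"));

    // Returns the names of the rules the candidate fails;
    // an empty list means the object may be promoted.
    public List<String> failures(Map<String, String> attrs) {
        List<String> failed = new ArrayList<>();
        rules.forEach((name, rule) -> {
            if (!rule.test(attrs)) failed.add(name);
        });
        return failed;
    }

    public static void main(String[] args) {
        PromoteValidator validator = new PromoteValidator();
        System.out.println(validator.failures(Map.of("owner", "jsmith")));
    }
}
```

Reporting the failing rule names back to the user is what keeps such checks from feeling like an arbitrary block on their work.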
We have released our first three videos: Wincom Welcome, Flexible Light Search and Wincom Console Management. Please go to our video channel to check these out. We will be releasing many more in the next weeks around the concepts of business processes and migrations, so make sure you follow us on LinkedIn or Twitter @WincomCo.
We have a new website, which is not just a new look and feel but also a way for us to showcase our new components. 2012 was a busy year: we expanded, moved offices and made a lot of innovative new components, and so in the next few weeks we will be adding videos of ready-to-use components, such as Wincom Welcome, Flexible Light Search, Wincom Migrate and Wincom Promote, to our site. Please come back, take a look and see what you think.
Follow us on LinkedIn for updates