Published: January 23, 2013

Three Strategies for Assessing IVD Instrument Feasibility Early in the Design Process

Optimal architecture design enhances rapid product development and product performance.

By: Spencer Lovette, Jack Kessler, and Jerry Sevigny

Rapid product development reaps well-known rewards: accelerated time to market and lower development cost. Successful rapid product development relies on the quality and completeness of the information available early in the product development process. For IVD instruments, understanding user requirements, selecting and optimizing the hardware and software architecture, and leveraging off-the-shelf technologies are essential elements of a successful rapid product development program. This article explains how timely application of these three strategies can minimize design changes, speed time to market, and, most importantly, allow OEMs to meet user needs at a reasonable cost.

Figure 1: Example of a common IVD instrument architecture using a heated wheel of cuvettes with a reagent dispenser at a fixed position. The incubation wheel can hold up to 15 cuvettes. Each cuvette passes out of the incubation wheel block when its duration time has expired. The sample generator creates items (cuvettes) that flow through the model. Cuvettes are transferred into the model at a fixed cadence (one every 10 seconds, for example) if there is an open slot in the incubation wheel. The assay assignment block assigns the type of assay (1 or 2) and the associated incubation times to each cuvette. The order of assay types and corresponding incubation times can be set to be random or defined; in both instances, the overall distribution can be specified (for example, assay type 1 is 30% of all assays).

Understanding User Needs

We’ve all seen that classic cartoon depicting the various departmental interpretations of user needs for the tire swing product.  Assigning marketing, engineering, or operations as the sole source for defining new product features and functions can yield not only radically different products but, more importantly, products that do not meet customer needs. Human factors and usability are now recognized as critical elements of successful medical product development. Developers of IVD instruments must demonstrate and document how human factors and usability considerations are part of the development process.
Aside from meeting regulatory requirements, performing user needs field research and concept preference testing early also makes business sense. Proposed instrument features and functions can be confirmed early in the conceptual stage of instrument development, avoiding costly major redesign iterations and schedule delays downstream. The risk of poor market acceptance after an instrument launch is greatly reduced by engaging a representative sample of users and listening to the voice of the customer. Early interactions with customers and users help develop and vet a more robust definition of the desired instrument features and functions.
The quality and clarity of the desired IVD instrument features and functions that result from these early efforts enable the development team to then conceive, analyze, optimize, and select suitable hardware and software architecture solutions.

Optimizing Hardware and Software Architecture Solutions

Architecture development is a key determinant of success. Architecture also determines how well the final product satisfies feature and functional requirements, and it becomes increasingly expensive to modify as instrument development progresses. Therefore, optimizing architecture during the concept phase is critical to rapid and low-cost development.
Simulation modeling greatly enhances architecture development during the concept phase in the following ways:
•  Enables candidate architecture evaluation. Simulation provides a platform to quantitatively compare tradeoffs involving operational performance (throughput, operator interaction, flexibility, and so forth), process technology (cap piercing versus removal, disposable versus washable elements), and other parameters (cost, complexity).
•  Facilitates architecture refinement. Simulation exposes architecture-related deficiencies in system performance. The simulation model identifies specific bottlenecks in the process flow and guides designers in addressing architectural and process limitations.
•  Prevents surprises. In the course of creating and using a simulation model, the designer discovers details, behaviors, and interactions that would otherwise be discovered only after the system was built.
•  Enables evaluation of control schema. A simulation allows exploration of different processing and control approaches, which is particularly valuable in the development of complex instruments such as random access systems. An accurate model allows quantitative evaluation of various scheduling algorithms on candidate architectures.
 

Figure 2: Simulation model of the slide-processing example described below: a series of activity blocks, each with a specified maximum capacity. A look-ahead control algorithm releases a sample into the process only when downstream slots will be available; otherwise the time slice is skipped.

Simulation is performed by creating a computer-based model that describes the interactions between things (samples, reagents, and disposables), resources (sample and reagent racks, ovens, pipettors, transports, detectors), and activities (heating, washing, mixing, detecting) in a manner that mimics a candidate system architecture. By running the simulation model, the designer measures parameters such as throughput or resource utilization and observes interactions and conflicts that predict the performance of the instrument once it is built.
Consider an assay that requires transfer of either one (assay type 1) or two reagents (assay type 2) into a sample. Each assay type needs to be incubated at 37°C prior to the subsequent assay step. A common architecture might use a heated wheel of cuvettes with a reagent dispenser at a fixed position. Assuming a common incubation time, some cuvettes will occupy the wheel for one revolution for single-reagent dispenses (type 1); others will make two revolutions for two-reagent dispenses (type 2). Cuvettes would be removed from the wheel for successive assay steps after their final incubation.
This architecture can be modeled very simply as shown in Figure 1.
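As a concrete illustration, the following is a minimal discrete-event sketch of this wheel architecture in Python using the open-source simpy library. The revolution time, run length, and variable names are illustrative assumptions, not values from any actual instrument.

import random
import simpy  # discrete-event simulation library

SLOT_CADENCE = 10      # seconds between transfer attempts into the wheel
WHEEL_SLOTS = 15       # incubation wheel capacity (per Figure 1)
REV_TIME = 300         # assumed seconds per wheel revolution (illustrative)
TYPE2_FRACTION = 0.30  # fraction of two-reagent (type 2) assays

completed = []

def cuvette(env, wheel, assay_type):
    """Hold a wheel slot for one revolution (type 1) or two (type 2)."""
    with wheel.request() as slot:
        yield slot
        yield env.timeout((1 if assay_type == 1 else 2) * REV_TIME)
    completed.append(env.now)

def sample_generator(env, wheel):
    """Offer one cuvette per cadence tick if the wheel has an open slot."""
    while True:
        yield env.timeout(SLOT_CADENCE)
        if wheel.count < wheel.capacity:
            assay_type = 2 if random.random() < TYPE2_FRACTION else 1
            env.process(cuvette(env, wheel, assay_type))

env = simpy.Environment()
wheel = simpy.Resource(env, capacity=WHEEL_SLOTS)
env.process(sample_generator(env, wheel))
env.run(until=3600)  # simulate one hour of operation
print(f"throughput: {len(completed)} cuvettes/hour")

Rerunning the model with different values of REV_TIME or TYPE2_FRACTION immediately shows how sensitive throughput is to the assay mix, which is exactly the kind of question explored in the next section.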
 

Simulation Enables Candidate Architecture Evaluation

The simulation model could depict how throughput varies over successive runs of 100 assays randomly generated with a fixed distribution of assay types. An experienced simulation modeler can rapidly investigate what-ifs such as the following (a sketch for scripting such sweeps appears after the list):

•  What if the assay mix of two reagent assays changes from 30% to 50%?
•  How many slots are required to ensure 60 cuvettes/hour throughput for a 50/50 mix of the two assay types? At what mix does the throughput fall below 50 cuvettes/hour?
•  What if the incubation times for the reagents differ by 20 minutes? Will a second reagent dispense location address that? An R-theta pipettor can intersect the wheel at two locations, but will we really need two dispensers to dispense the first and second reagents? If we use two pipettors, will they create any bottlenecks?
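One way such sweeps might be scripted, as a sketch only: the hypothetical run_wheel_model helper below wraps the wheel model from the previous sketch in a function and sweeps the assay mix and slot count. All parameter values are illustrative assumptions.

import random
import simpy

def run_wheel_model(type2_fraction, wheel_slots=15, cadence=10,
                    rev_time=300, horizon=3600, seed=0):
    """Run one scenario of the wheel model; return cuvettes/hour."""
    rng = random.Random(seed)
    completed = []

    def cuvette(env, wheel, assay_type):
        with wheel.request() as slot:
            yield slot
            yield env.timeout((1 if assay_type == 1 else 2) * rev_time)
        completed.append(env.now)

    def generator(env, wheel):
        while True:
            yield env.timeout(cadence)
            if wheel.count < wheel.capacity:
                env.process(cuvette(env, wheel,
                                    2 if rng.random() < type2_fraction else 1))

    env = simpy.Environment()
    wheel = simpy.Resource(env, capacity=wheel_slots)
    env.process(generator(env, wheel))
    env.run(until=horizon)
    return len(completed) * 3600 / horizon

# What if the two-reagent mix grows from 30% toward 50%?
for mix in (0.30, 0.40, 0.50):
    print(f"type-2 mix {mix:.0%}: {run_wheel_model(mix):.0f} cuvettes/hour")

# How many wheel slots sustain 60 cuvettes/hour at a 50/50 mix?
for slots in (15, 20, 25, 30):
    print(f"{slots} slots @ 50/50 mix: "
          f"{run_wheel_model(0.50, wheel_slots=slots):.0f} cuvettes/hour")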

Even a simple model can enable a design team to compare the performance of different architectures with differing associated costs. Of course, this example is a very simple scenario that may be analyzed without a model. However, the architectural complexity of real systems frequently exceeds the analytical ability of linear tools such as spreadsheets.
Model complexity typically increases over the course of product development. Simple models that approximate candidate architectures are used to evaluate alternate conceptual design candidates. These models must capture the essential functional differences implied by the different architectures (serial versus parallel processing, cadence versus variable process timing, interdependent subsystem interactions, and so on).
When one or several promising architectures emerge, these models are revised to more closely represent the processes. Subsequently, when the final candidate architecture is selected, the model evolves from an architecture evaluation tool into a systems timing analysis and specification tool.
In this mode, the simulation confirms overall timing and throughput requirements and generates subsystem timing requirements, which serve as the design inputs for each subsystem. During later design phases, if a design is unable to achieve its timing goal, we return to the simulation model to assess the impact and investigate alternatives to mitigate the shortfall. Perhaps another module can compensate by exceeding its timing target. Obviously, it is speedier and less expensive to assess options in a model than with revised hardware and software.
With the right expertise, simulation models are rapidly constructed and serve as extremely productive analysis tools in selecting and optimizing the best instrument architecture.

Simulation Prevents Surprises During Design and Integration

During the course of enhancing a simulation model to more accurately reflect the instrument process, one typically discovers bottlenecks or interaction conflicts. Prior to simulation, these surprises often were not discovered until hardware and software integration, which is painfully late in the design process.
Here is a real example of the benefits of using a simulation model in an automated sample preparation system. This model includes a robot to transfer samples between several processing stations, a sample and reagent pipetting station, a mixing station, a filtering station, and a centrifuge. The initial simulation assumed a simple delay to move samples from station to station. Later, the model was refined to track each sample individually and to model the robot as holding only one sample at a time. In some instances the simulation would stall. The model disclosed that an additional queuing location was required to make room for an outgoing sample before a second incoming sample could be brought to a receiving station. We were also able to evaluate the throughput impact of a robot end effector that could pick up and swap two samples; this would cost more than adding a holding location but would bring the additional benefit of higher throughput. The simulation identified a design concept limitation, which spawned an additional cost-benefit tradeoff evaluation. The solution was to add the sample queuing location. The important point is that this feature was identified, optimized, and characterized using the simulation model, saving significant design and integration time. Sample transfer rules were also tuned in the simulation model to yield optimum performance, long before the hardware or software was designed. Ultimately, those rules were incorporated into the final instrument software control algorithms. This early understanding of hardware and software architecture relationships clarifies design requirements early in the concept evaluation stage.
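To make the stall concrete, here is a toy sketch (all names hypothetical) of the transfer decision a one-sample robot faces when the receiving station is occupied, with and without a holding location:

def transfer_action(station_occupied: bool, robot_holding_incoming: bool,
                    queue_slot_free: bool) -> str:
    """Enumerate what the robot can do at a full receiving station."""
    if not station_occupied:
        return "place incoming sample"
    if queue_slot_free:
        return "park outgoing sample in queue slot, then place incoming"
    if not robot_holding_incoming:
        return "pick up outgoing sample first"
    return "STALL: robot full, station full, no queue slot"

# Without the extra queuing location, the system deadlocks:
print(transfer_action(True, True, False))  # STALL ...
# With it, the transfer proceeds:
print(transfer_action(True, True, True))   # park outgoing, then place incoming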

Simulation Serves as a Laboratory to Develop and Evaluate Control Schema

The preceding example touches on simple control schema development. In the next example, samples are moved on slides through successive heaters and reagent baths. Because there are several protocols with varying times, conflicts are inevitable without a scheduling algorithm that only starts a sample through the process if it won't cause a conflict. The simulation model (Figure 2) is simply a series of activities, each with a specified maximum capacity. When each sample's duration expires, it passes to the next block if downstream capacity is available. The control algorithm must look ahead to all of the samples in each activity and only release the next sample if its resource requirements match the available slots in the future. If there is no match, the time slice is skipped.
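A minimal sketch of such a look-ahead scheduler follows, assuming time-sliced operation; the activity names, capacities, and protocol durations are illustrative assumptions.

from collections import defaultdict

# (activity name, simultaneous-slide capacity); values are illustrative
ACTIVITIES = [("heater", 4), ("reagent_bath", 2), ("wash", 2)]
# time slices each protocol spends in each activity (illustrative)
PROTOCOLS = {
    "A": {"heater": 6, "reagent_bath": 3, "wash": 2},
    "B": {"heater": 4, "reagent_bath": 5, "wash": 2},
}

occupancy = defaultdict(int)  # (activity, time slice) -> slides present

def fits(protocol, start):
    """Look ahead: would a slide released at `start` overfill any future slot?"""
    t = start
    for name, cap in ACTIVITIES:
        dur = PROTOCOLS[protocol][name]
        if any(occupancy[(name, t + dt)] + 1 > cap for dt in range(dur)):
            return False
        t += dur
    return True

def release(protocol, start):
    """Commit the slide's future occupancy in every activity it will visit."""
    t = start
    for name, _ in ACTIVITIES:
        dur = PROTOCOLS[protocol][name]
        for dt in range(dur):
            occupancy[(name, t + dt)] += 1
        t += dur

# Attempt one release per time slice; skip slices that would cause a conflict.
queue = ["A", "B", "A", "A", "B", "A", "B", "B"]
t = 0
while queue:
    if fits(queue[0], t):
        release(queue.pop(0), t)
    t += 1  # advance one time slice whether or not a slide was released
print(f"last slide released at time slice {t - 1}")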
This simulation provides performance statistics to compare alternate algorithms and hardware tradeoffs. For example:
1) Activity blocks provide utilization statistics (each activity's average occupancy relative to its capacity); a readout sketch follows this list. Bottlenecks and excess capacity may be optimized at each block to achieve a desired average throughput at the lowest cost.
2) More sophisticated algorithms can be investigated, for example, one that searches the sample list for any sample that can run in the next time slot, which we expect would allow less hardware capacity and lower product cost.
3) Hardware tradeoffs and what-if scenarios can be easily evaluated, such as:
a)    Must more than one sample be transferred at each time step?
b)    How many transfer mechanisms are required?
c)    If the duration times need not be exact, how few transfer mechanisms are required to ensure the durations are within, say, 5% of nominal?
d)    And how much capacity (cost) reduction does that provide?
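Continuing the scheduler sketch above (and reusing its occupancy, ACTIVITIES, and t), the utilization readout described in item 1 might look like this:

horizon = t  # time slices spanned by the release run above
for name, cap in ACTIVITIES:
    # total slide-slices used in this activity up to the last release
    used = sum(occupancy[(name, s)] for s in range(horizon))
    print(f"{name}: {used / (cap * horizon):.0%} average utilization")

An activity running near 100% is the bottleneck; one running far below suggests capacity (and cost) that could be trimmed.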

As with most sophisticated tools, results are highly dependent on the experience of the user. An expert modeler applies the appropriate level of detail to each objective from architecture selection through refinement, concept verification and generation of design input specifications. It is advantageous to field a dedicated, experienced staff of specialists to ensure the level of expertise required for the best results. As with all specialties, frequent use typically leads to greater skill.

Leveraging Off-the-Shelf Technologies

We all feel the demands for shorter product development cycles to reduce time to market and lower development costs. Medical manufacturers and their outsourced development and manufacturing partners are finding that one viable option is to leverage off-the-shelf (OTS) technologies.
OTS components and systems can reduce the cost of design as long as the feature sets meet design requirements. Many times an OTS solution does not meet the cost target. There are many parts in an IVD platform. Incorporating multiple vendors’ hardware and software OTS products can be challenging and costly from an integration, verification, and validation standpoint.
KMC Systems has experience with many vendor-based OTS products and has successfully integrated them with custom solutions. We have developed a family of KMC OTS solutions that provide scalable, flexible modules for diagnostic instruments with different footprints and configurations across many chemistry processes and detection technologies. The modules include all the major elements (positioning, motion control, and robots; drive and control electronics; control, data processing, and GUI software; liquid handling, precision fluidic control, and dispense) needed to construct a complete platform. But the real value of using OTS products from a vendor such as KMC is that the designs are proven and verified, reducing custom development costs as well as the subsequent module test and integration costs.
New value propositions always address the “bigger-better-faster-cheaper” principle. For IVD instrumentation platforms, that translates to “smaller-more features-faster-cheaper.” Unique product features requiring custom design are needed to provide product differentiation and address user needs. However, these custom features can still be implemented in a shortened schedule with reduced cost if off-the-shelf products, preferably with a proven and verified design, are leveraged as part of the total solution.

 

Spencer Lovette is program manager at KMC Systems, an Elbit Systems of America Co. He can be reached at Spencer.Lovette@elbitsystems-us.com.
 
Jack Kessler is principal systems engineer at KMC Systems, an Elbit Systems of America Co. He can be reached at Jack.Kessler@elbitsystems-us.com.
 
Jerry Sevigny is principal systems engineer at KMC Systems, an Elbit Systems of America Co. He can be reached at Jerry.Sevigny@elbitsystems-us.com.

 

 

