Advantages of Quantisweb in New Product Development Environment
By William Blasius
“Voice of the Customer” Advisor to Quantisweb Technologies
A Quantisweb Technologies White Paper
January 2018
As seasoned product and process development professionals, experienced in discovery/screening projects and subsequent pilot scale-up, you understand the importance of using Design of Experiments (DOE) methodology to avoid the trap of one-factor-at-a-time (OFAT) experimentation. You have seen it a hundred times over: a co-worker decides to change only one variable in order to reach an optimal solution quickly. That one variable turns into two, then three, then four. Without a solid plan, any chance of capturing interactions between the variables is squandered. All hope of an optimal solution is eventually lost, and the data gathered is useless for plugging back into standard design of experiments software. Eventually, executives in Sales & Marketing and Management tire of waiting for the promised results and decide to push the new product out to customers rather than risk losing potential share by being last to market. If things go well, the product is good enough and sells nicely to receptive customers. If things do not go well, the company's brand reputation takes a hit for introducing a product that does not live up to its marketing claims.
A significant step up from one-factor-at-a-time fiddling is standard Design of Experiments. DOE as we know it has its foundations in the 1920s; it was a brilliant innovation that ushered in a new era of agricultural research and spread into the chemical industry in the 1950s. Yet the concepts that drive the rigid statistics behind DOE also become its limiting factor in a modern industrial R&D environment. The number of trials required to meet statistical rigor increases exponentially with the number of variables. One of the most useful aspects of DOE, response surface visualization, becomes merely conceptual beyond the three variables that form a three-dimensional cube. The typical corporate response to the confusion created by all the named DOE design options has been to deploy statisticians to help the experimenters. This assistance usually means forcing some trials into formulation or process zones that are certain to fail, for the sake of orthogonal symmetry. Given that a two-level full factorial design requires 8 trials for three variables, 16 for four, 32 for five and 64 for six, it is easy to see how quickly an experimenter's best intentions of covering as many variables as possible can become overwhelming. Assuming a conservative average cost of $455 (7 unburdened hours at $65 per hour) for a single one-day laboratory-scale trial, the costs run from $3,640 to $7,280, $14,560 and $29,120 for 3, 4, 5 and 6 variables at only two levels. Table 1 below charts the efficiency of Quantisweb in relation to the number of trials required; a short calculation following the table works through the same arithmetic.

What can an experimenter do with an eight-component formulation running through a twenty-variable process? Running 2 raised to the 28th power, roughly 268 million trials at a cost of about $122 billion, is out of the question. Screening designs, cutting that number in half or to a quarter or a tenth, hardly make a dent in the impossibility of investigating all the variables at the same time. Chunking the problem into manageable groups of three variables at two levels is a reasonable compromise, with the advantage of being able to tease out some potentially significant interactions. But someone still has to decide that at least a few variables are insignificant and can be ignored, based on gut feeling or rules of thumb, with no guiding data. The long-term issue with chunking is that in a resource-stretched environment, no one will willingly give up a set of experiments to try a formulation ingredient or process condition that has never been tried before. And isn't that where significant innovations come from? Untried combinations in the discovery phase of a project can mean the difference between delivering a nice incremental improvement and creating a market-changing innovation before the competition makes your product obsolete.
Table 1: Efficiency of Quantisweb relative to the number of trials required.
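The arithmetic behind those figures is simple enough to check directly. The short Python sketch below reproduces the two-level full factorial trial counts and costs quoted above, using the same assumed $455 per trial; it is purely illustrative and not part of any DOE software.

```python
# Illustrative arithmetic only: trial counts and costs for two-level
# full factorial designs, at the white paper's assumed $455 per trial.
COST_PER_TRIAL = 455  # 7 unburdened hours at $65 per hour

for n_variables in (3, 4, 5, 6, 28):
    n_trials = 2 ** n_variables              # full factorial at two levels
    total_cost = n_trials * COST_PER_TRIAL
    print(f"{n_variables:>2} variables -> {n_trials:>12,} trials -> ${total_cost:>15,}")
```

Running it confirms the progression from $3,640 for three variables up to roughly $122 billion for the twenty-eight-variable case.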
Quantisweb Technologies has been helping development professionals with formulation and process improvements and innovations. The heart of the Quantisweb software is an integrated optimization package that combines Design of Experiments with a weighted ranking system, the Analytical Hierarchy Process (AHP), and a statistical means of making choices under uncertainty known as Decision Theory (DT). This combination, powered by machine learning, has allowed the development of a next-generation design, analysis and optimization tool that experimenters can use in a variety of development environments.
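Quantisweb's internal implementation is not published here, but the weighting idea behind AHP can be illustrated with a standard textbook calculation: pairwise judgments about which outputs matter more are collected in a reciprocal comparison matrix and condensed into normalized weights. The sketch below is a minimal, generic example; the output names and comparison values are invented for illustration.

```python
import numpy as np

# A minimal, generic AHP sketch (not Quantisweb's implementation): derive
# importance weights for three hypothetical outputs from a pairwise
# comparison matrix. Entry [i, j] says how much more important output i
# is than output j on a 1-9 scale; the matrix is reciprocal by construction.
comparisons = np.array([
    [1.0, 3.0, 5.0],   # tensile strength vs. (itself, clarity, cost)
    [1/3, 1.0, 2.0],   # clarity
    [1/5, 1/2, 1.0],   # cost
])

# Approximate the principal eigenvector by normalizing each column and
# averaging across the rows -- the standard textbook shortcut.
column_normalized = comparisons / comparisons.sum(axis=0)
weights = column_normalized.mean(axis=1)

for name, w in zip(["tensile strength", "clarity", "cost"], weights):
    print(f"{name}: {w:.3f}")
```

Weights of this kind are what an importance/desirability ranking over multiple outputs ultimately reduces to.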
Thanks to the ever-increasing computing power available, the Quantisweb DOE function has evolved from its early-twentieth-century origins in orthogonal arrays (paired data-point comparisons) to cutting-edge stochastic approximation mathematics, an iterative means of modeling extremely complex problems with multiple unknown parameters. This has become what is now known as the minimal dynamic Design of Experiments, mdDOE, platform. The software is configured to use up to 200 variables to optimize up to 100 outputs. In a standard DOE system, that would require 2 to the 200th power trials, roughly a 1 followed by 60 zeros. With Quantisweb, that number is reduced to the number of variables plus one, or 201 trials at the software's current maximum. The number of required trials is similar to a Taguchi or Plackett-Burman experimental design, except that those two methods optimize only one output at a time, whereas Quantisweb optimizes all outputs at once. The other big difference is that with stochastic approximation, trials can be clustered in the areas the experimenter finds most interesting. The software deals with uncertainty; you do not have to fill in every blank with a certainty or a certain failure. From the beginning, experimenters have as much control of the process as they are comfortable with. After completing the trials, the AHP and Decision Theory components allow the experimenter to take the behavioral laws generated from the trials and optimize the inputs (formulation ingredients, process conditions) based on importance/desirability weightings assigned to the different outputs. The input variables can be numeric, named or conditional, and in combinations. Conditional boundaries can be set up so that Ingredient X cannot be used at its maximum level if Ingredient Y is in the formula, or so that RPM must exceed A if the feed rate is above B. Reduced to its simplest form, the software accommodates the way creative experimenters think rather than forcing experimenters to think like statistical software.
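To make those conditional boundaries concrete, the sketch below expresses two rules of the kind just described as simple feasibility checks on a candidate trial point. The variable names, limits and the check itself are hypothetical; they only illustrate how such constraints can be stated in the experimenter's own terms rather than in statistical ones.

```python
# Hypothetical illustration of conditional boundaries of the kind described
# above; the names and limits are invented, not taken from Quantisweb.
X_MAX = 10.0        # maximum allowed level of Ingredient X
FEED_RATE_B = 50.0  # feed-rate threshold B
RPM_A = 1200.0      # minimum RPM A required above that feed rate

def is_feasible(trial: dict) -> bool:
    # Ingredient X may not sit at its maximum if Ingredient Y is present.
    if trial["ingredient_y"] > 0 and trial["ingredient_x"] >= X_MAX:
        return False
    # If the feed rate exceeds B, the RPM must exceed A.
    if trial["feed_rate"] > FEED_RATE_B and trial["rpm"] <= RPM_A:
        return False
    return True

candidate = {"ingredient_x": 10.0, "ingredient_y": 2.0,
             "feed_rate": 60.0, "rpm": 1500.0}
print(is_feasible(candidate))  # False: X is at its maximum while Y is present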
A very important aspect of the Quantisweb methodology is the agility built into the process. In the Fuzzy Front End of the innovation process, the target goals are assumed, untested and often anecdotal. Using Quantisweb early in the feasibility/discovery phase, a rough model can be developed in a minimal number of trials. Prototypes developed rapidly against those early goals can be beta-tested with potential customers to see whether the goals are being met, or whether they were in fact the appropriate targets at all. If the goals need to shift based on customer input, further work can retain the base knowledge of the first round to increase data density and model accuracy. The Quantisweb methodology is not just iterative, it is cumulatively iterative. That is the true value of a machine learning system over the discrete paired-comparison approach built into standard DOE programs. The software can even be used to put some meaning to the OFAT efforts of your co-workers, albeit with a few additional experiments to fill critical gaps if necessary. Essentially, you would be using the data-mining potential of the software in an R&D environment.
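As a rough illustration of what "cumulatively iterative" means in practice, the sketch below pools a second round of trials with the first before refitting a simple model, so the earlier data keeps informing the fit even after the target region shifts. The ordinary least-squares model here is only a stand-in and is not a claim about the behavioral laws Quantisweb actually builds.

```python
import numpy as np

# A minimal stand-in for cumulative iteration (not Quantisweb's algorithm):
# second-round trials are pooled with the first round before refitting,
# so earlier data still contributes to model accuracy after goals shift.
rng = np.random.default_rng(0)

def fit_linear_model(X, y):
    # Ordinary least squares with an intercept column.
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Round 1: a handful of discovery-phase trials over an assumed region.
X1 = rng.uniform(0.0, 1.0, size=(6, 2))
y1 = 2.0 + 1.5 * X1[:, 0] - 0.5 * X1[:, 1] + rng.normal(0, 0.05, 6)

# Round 2: new trials after customer feedback shifts the region of interest.
X2 = rng.uniform(0.5, 1.5, size=(5, 2))
y2 = 2.0 + 1.5 * X2[:, 0] - 0.5 * X2[:, 1] + rng.normal(0, 0.05, 5)

# Refit on the pooled data instead of discarding round 1.
coef = fit_linear_model(np.vstack([X1, X2]), np.concatenate([y1, y2]))
print(coef)  # intercept and two slopes, refined by both rounds together
```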
In summary, Quantisweb Technologies has taken the experimental discipline developed by R.A. Fisher in the 1920s and refined by Genichi Taguchi in the 1980s, and applied to that foundational work mathematical processes that are only now commercially practical because of the continued expansion of computing speed and power. Combining contemporary mathematics with the Analytical Hierarchy Process and Decision Theory drives Quantisweb software that minimizes trial commitment and maximizes agility while delivering behavioral laws that build in robustness with each iteration. Quantisweb Technologies' goal is to help you develop and optimize your new products and processes as quickly and robustly as possible so you can hit the market first, with products you can be proud of.