Benchmarking and Other Stuff

From: Amit Bagchi (Clemson University)
To: André Dolenc (Helsinki University of Technology)
Subject: Benchmarking and Other Stuff
Date: Tue, 26 Jul 1994 10:40:04 EST5EDT
Forwarded to RP-ML by André on 94 08 15

...stuff deleted...

I am also sending you my response to the questions you have raised on 
benchmarking (my responses are within [] marks).  I have been working 
on this for the past year with one of my graduate students, Mr. Dureen 
Jayaram (e-mail address: djayara@eng.clemson.edu).  We have designed 
our own benchmark part and had Ford Motor Co. build it for us.  
Currently we are making measurements on the parts, and we will present 
the work in its entirety in Mr. Jayaram's MS thesis (expected December 
1994) and in two papers -- one at the Austin Conference (1994) and one 
possibly at the Dayton Conference (1995).  If you wish, you may 
contact Mr. Jayaram directly through e-mail.  I have already sent him 
a copy of your mailing and asked him to respond if he so wishes.

Amit Bagchi
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>
>                        BENCHMARKS
>
>My interest in this topic is to learn what it is that people want to
>measure in an RPT benchmark, and whether it is being done properly. I
>will share with you what I've learned about benchmarks in a database
>seminar by Jim Gray; the material below was taken from what was
>distributed there and from my personal notes. At the end, I have a few
>questions.
>
>1. The importance of benchmarks
... stuff deleted ... (see original posting)
>      - ad infinitum.

    [I agree with these comments.]


>
>It is my opinion that all this applies to RPT with some minor modifications.
>The first questions I have are:
>
>1. Is there a clear understanding on how to benchmark RP processes?

    [I do not think so.  The problem is that RP is a combination of 
processes, materials, and operator expertise.  The process of 
machining or casting, for example, has evolved over several hundred 
years, and there is enough "knowledge" about it that it can be 
"standardized".  In the case of rapid prototyping, most of the 
understanding comes from trial and error, so a clear understanding of 
how to benchmark is hard to arrive at.  However, if one looks at the 
basics, the question is simple:  "Can it do my part?"  The issues are 
also straightforward: (i) surface finish; (ii) warpage; (iii) 
dimensional accuracy; (iv) repeatability; (v) mechanical properties; 
(vi) others.  
    If we also keep in mind that at this time we cannot decouple the 
effects of (i) machine, (ii) people, and (iii) material, then we 
should treat it as a coupled problem and strive to use the same 
material, and perhaps, with a proper human factors experiment, have 
the same person operate two different machines (this operator issue 
holds in most machining operations other than CNC machining, for 
example).  
    We should also see whether the people in the machine-building 
field will allow the details of the process model to be disseminated 
in public.  Surely then some researchers could incorporate them into 
process models and carry out parametric studies through analysis, 
which would be far less tedious than carrying out umpteen experiments 
(a small sketch of such a factorial layout follows below).  However, 
I do not feel that this decoupling is going to happen in the near 
future.] 
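
As a rough illustration of the parametric-study idea, here is a 
minimal sketch of how a full-factorial layout over the coupled 
machine/material/operator factors could be enumerated.  The factor 
names below are hypothetical placeholders, not the actual machines, 
materials, or operators in the Ford/Clemson study:

    import itertools

    # Hypothetical factor levels standing in for the real machines,
    # build materials, and operators under comparison.
    machines = ["machine_A", "machine_B", "machine_C"]
    materials = ["material_1", "material_2"]
    operators = ["operator_X", "operator_Y"]

    # Response metrics from the list in the reply above.
    metrics = ["surface finish", "warpage", "dimensional accuracy",
               "repeatability", "mechanical properties"]

    # Every combination of the coupled factors is one physical build.
    runs = list(itertools.product(machines, materials, operators))
    print("%d builds per replicate, %d measurements in all"
          % (len(runs), len(runs) * len(metrics)))
    for machine, material, operator in runs:
        print("build the benchmark part on %s with %s, operated by %s"
              % (machine, material, operator))

Even with only a handful of levels per factor, the number of physical 
builds multiplies quickly, which is the argument for pushing part of 
the study into process models and analysis.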


>2. Is there a consensus on how to benchmark RP processes? 

    [I do not think there is one now.  But people are talking and 
thinking about it.  That is how our project with Ford got started.  
Ford and Clemson people got to talking about how difficult it was to 
find any comparative study of rapid prototyping technologies; the few 
that are in the public domain are either too focused on promoting 
their own technologies or are not detailed enough.  The overall goal 
of our project is to provide some insight into the processes so that 
users can compare the different technologies on the simple 
manufacturing basis I have outlined above.  Both Ford and Clemson are 
committed to making our work, thoughts, and data public in the proper 
circles.  If a group is formed to develop a benchmarking standard, I 
will be happy to contribute to it and be a member.]

>3. If not, isn't it time we knew how to do this? 

    [Sorry, I partly answered this under the previous question.  I do 
not know whether it is yet time for us to know how to do this, since 
we understand so little about the materials and processes (leaving 
aside operator skill).  But I agree with you that it is time to start 
working on a blueprint for it.]

>4. The test objects being used in benchmarks should be publicly available. 

    [I agree.  Again, there should be standards.  For example, should 
we use STL, IGES, PDES, or some other format?  Perhaps we should have 
a standards sub-group look into the whole issue and then approach 
both NIST in the USA and other organizations such as ISO worldwide to 
accept these standards -- for machines, materials, and benchmarking 
parts and processes.]

>   Are they easy to obtain? How does one obtain them?

    [Good question!!  If you want, we will be happy to send you our 
part files in STL format.]
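
For concreteness, here is a minimal sketch of how a test object could 
be written out in the ASCII STL format mentioned above.  The geometry 
is a made-up single triangle, not the actual Ford/Clemson benchmark 
part:

    # Write a one-facet ASCII STL file -- a stand-in for the kind of
    # neutral file a benchmark part could be distributed in publicly.
    facet_normal = (0.0, 0.0, 1.0)
    vertices = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]

    with open("benchmark_part.stl", "w") as f:
        f.write("solid benchmark_part\n")
        f.write("  facet normal %g %g %g\n" % facet_normal)
        f.write("    outer loop\n")
        for v in vertices:
            f.write("      vertex %g %g %g\n" % v)
        f.write("    endloop\n")
        f.write("  endfacet\n")
        f.write("endsolid benchmark_part\n")

A real benchmark file would carry the full tessellation of the part; 
an IGES or PDES/STEP exchange would carry the exact geometry rather 
than a tessellated approximation.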

