
Advanced Distributed Simulation Research Consortium

Parallel and Distributed Evaluation, Visualization, and AI Reasoning to Advance DIS Technology

 

PROGRESS REPORT 1998

 

 

Army Research Office

Grant No. DAAH04-95-1-0250

Grambling State University

Department of Mathematics and Computer Science

Grambling, LA 71245

Table of Contents

1. Project Progress Reports from Task Teams

1.1 ADS Architecture Design and Evaluation (ADS-ADE) Task

Coordinator: Ratan K. Guha

1.2 ADS Visualization and Synthetic Environment (ADS-VSE) Task

Coordinator: Patrick Bobbie

1.3 ADS Knowledge Base Systems (ADS-KBS) Task

Coordinator: Richard Aló

1.4 Student Outreach and Training Program Task

Coordinator: Muddapu Balaram

2. Publications, Presentations and Personnel

2.1 ADS Architecture Design and Evaluation (ADS-ADE) Task

Coordinator: Ratan K. Guha

2.2 ADS Visualization and Synthetic Environment (ADS-VSE) Task

Coordinator: Patrick Bobbie

2.3 ADS Knowledge Base Systems (ADS-KBS) Task

Coordinator: Richard Aló

2.4 Student Outreach and Training Program Task

Coordinator: Muddapu Balaram

3. Visits by ADSRC Researchers

3.1 Visits by ADSRC Researchers and Students

 

Appendix: Publication Abstracts

 

 

 

 

 

 

 

SECTION 1.

 

PROJECT PROGRESS REPORTS

FROM TASK TEAMS

1.1 ADS Architecture Design and Evaluation (ADS-ADE) Task

Task Coordinator: Ratan K. Guha

OVERVIEW

Objectives and Significance:

The main objectives of the research performed by the ADS Architecture Design and Evaluation team are: (1) to develop techniques to model and evaluate the performance of large-scale ADS systems, (2) to develop algorithms, based on the DoD High Level Architecture (HLA), for improving the scalability and reducing the bandwidth requirements of distributed simulation systems, and (3) to improve solution techniques used by tools for the modeling and performance evaluation of DIS systems.

Specifically, our objectives in the general area of research for HLA are to (1) examine various approaches to relevance filtering in the context of the HLA and devise alternative algorithms that can improve their execution efficiency and filtering efficiency, (2) develop better performance analysis methodologies to fully determine the performance characteristics of the various data distribution approaches, (3) propose changes to current HLA specifications to incorporate object fidelity requirements in its data distribution management services, and (4) investigate methods for efficiently implementing the new services.

Regarding implementation issues for large distributed systems, our objectives of research are to (1) develop fundamental concepts in distributed mutual exclusion for efficiency and fault tolerance capabilities of the system, and (2) design schemes for channel assignments and handoff handling in mobile cellular networks.

For general-purpose tools, our objective is to develop easily accessible interval computation software which can reliably perform fundamental interval arithmetic and set operations and bound elementary interval functions.

The successful implementation of efficient relevance filtering in network gateways would help solve one of the challenges facing the design of highly scalable DIS and HLA systems. In HLA, the Data Distribution Management is the RTI component responsible for increasing the scalability of the training exercise and reducing the traffic among federates. Filtering is obtained by allowing federates to declare and use routing spaces and custom filtering schemes. Improving the effectiveness of filtering greatly improves HLA scalability. The development of fundamental concepts for implementing large distributed systems would solve many problems facing the efficient and robust implementation of scalable and HLA systems.

Army Relevance:

The research on filtering and routing spaces is crucial to the scalability of HLA currently being developed under the auspices of the Defense Modeling and Simulation Office as the DoD-wide standard for simulation. The results of this research have provided good insight into the behavior of relevance filtering schemes, their real-time performance and their reliability. With this insight, the development of reliable and efficient filtering methods is greatly enhanced.

Wireless and mobile communication technologies will continue to play an increasingly important role in the various functions of the armed forces. The development of efficient channel assignment and handoff protocols is one of the critical problems facing the designers of third-generation wireless systems. Our research addresses the real-time performance issues of handoff protocols and seeks to develop efficient channel assignment and handoff schemes suitable for the delivery of UAV-collected imagery to/from mobile tactical vehicles and the delivery of location-based video/audio data to mobile users. Efficient handoff designs will also help the effort to incorporate live vehicles in military simulation training exercises.

For synchronization in a distributed system, the special class of non-token-based algorithms known as the quorum-based approach requires each process to exchange messages with only a subset of specific nodes in the distributed system. The quorum-based approach is attractive due to its well-defined concept, low message overhead, and its capability to tolerate both site and link failures. The development of this concept will contribute to the efficiency and fault tolerance capabilities of HLA systems.

Accomplishments:

Several teams have been formed to work on several sub-tasks. Areas of accomplishments include relevance filtering for large-scale simulation exercises, HLA data distribution management, distributed object standards, real-time protocols for channel assignments and handoff handling in mobile cellular environments, triangular level quorums for distributed mutual exclusion, an online interval calculator, and a parallel global optimization software tool.

Planned Activities:

The planned activities of each ADS-ADE sub-task are described in the individual reports given on the following pages.

 

Design and Evaluation of Data Distribution Algorithms in the HLA

A. Berrached, M. Beheshti, O. Sirisaengtaksin (UHD)

M. Bassiouni (UCF)

 

Research Objectives and Significance:

The High Level Architecture (HLA) provides a set of services to facilitate the explicit control of data distribution. However, the performance of HLA data distribution depends on the actual algorithm used to implement those services. The objectives of this project are to (1) examine various approaches to relevance filtering in the context of the HLA and devise alternative algorithms that can improve their execution efficiency and filtering efficiency, and (2) develop better performance analysis methodologies to fully determine the performance characteristics of the various data distribution approaches.

Army Relevance:

The HLA, designated as the standard architecture for distributed simulation systems, has been supported by DMSO since its inception. Efficient and effective data distribution within this standard architecture is crucial to its scalability and future applicability. This project addresses issues that directly impact the efficiency of data distribution in HLA.

Methodology:

We developed simulation programs in C/C++ to investigate the performance of various approaches to relevance filtering in the context of the HLA. We studied the fixed-grid approach, which has been used in a number of current HLA implementations. This is the simplest approach, incurs the least overhead, and is highly scalable. However, it is the least effective in terms of its filtering performance: because the grid layout is predefined statically and remains fixed throughout the simulation, it results in inefficient utilization of multicast groups and relatively low reductions of irrelevant data traffic. Based on these results, we devised a multi-resolution grid-based approach that allows the grid layout to be re-configured to match the distribution of object attribute values in the routing space. Our evaluation study shows that this approach achieves better filtering, especially when the number of multicast groups is relatively limited. The filtering improvement is achieved at the cost of dynamic grid reconfiguration.

One of the main conclusions from our study is that in the context of large-scale simulations (large numbers of federates and entities), it is difficult to devise a single optimal algorithm that satisfies the needs and capabilities of all federates. We therefore formulated a hierarchical scheme that allows filtering to be done in a sequence of stages with increasing levels of accuracy as data moves in the network hierarchy from sender to receiver. An important aspect of this approach is that it provides a framework for partitioning a large-scale federation into a hierarchy of smaller "sub-federations" and using different algorithms at different levels of the hierarchy according to the specific needs and capabilities of each "sub-federation".
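A minimal sketch of the fixed-grid filtering idea described above (the grid resolution G, the 2-D unit routing space, and the cell-to-multicast-group mapping are illustrative assumptions, not the project's actual code):

```cpp
#include <cassert>
#include <set>
#include <algorithm>

// Fixed-grid relevance filtering sketch: a 2-D routing space [0,1)^2
// is split into G x G cells, with one multicast group per cell.
const int G = 8;  // grid resolution (assumption)

int cellOf(double x, double y) {          // cell index of a point
    int cx = std::min(G - 1, (int)(x * G));
    int cy = std::min(G - 1, (int)(y * G));
    return cy * G + cx;
}

// Multicast groups a subscription region [x0,x1] x [y0,y1] overlaps.
std::set<int> groupsFor(double x0, double y0, double x1, double y1) {
    std::set<int> groups;
    for (int cy = (int)(y0 * G); cy <= std::min(G - 1, (int)(y1 * G)); ++cy)
        for (int cx = (int)(x0 * G); cx <= std::min(G - 1, (int)(x1 * G)); ++cx)
            groups.insert(cy * G + cx);
    return groups;
}

// An update from position (x,y) is relevant iff its cell is subscribed.
bool relevant(const std::set<int>& subs, double x, double y) {
    return subs.count(cellOf(x, y)) > 0;
}
```

The multi-resolution variant described above would, in effect, let G vary across the routing space to match the distribution of object attribute values.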

Accomplishments:

The following tasks have been accomplished:

  1. Developed simulation programs in C/C++ to investigate various approaches to relevance filtering in distributed simulation systems.
  2. Devised new approaches that promise to improve the performance of data distribution management of High Level Architecture (HLA) distributed simulations. These include a hierarchical grid-based approach and a dynamic/variable-resolution grid-based approach.
  3. Evaluated the performance of the new approaches under various simulation parameters and compared it to the traditional fixed-grid approach employed in current implementations of the HLA.

Planned Activities:

We plan to expand our work to the following areas:

  1. Investigate non-gridded approaches. In particular, we plan to investigate region clustering techniques. Region clustering groups objects based on the proximity of their interest regions in a routing space. Region clustering techniques have the potential of achieving near-optimal filtering performance. However, their efficiency, in terms of overhead cost, and their scalability remain questionable and need further investigation.

  2. Extend the hierarchical approach to incorporate non-gridded algorithms. In particular, the interfaces between the different levels of the hierarchy need to be further developed to provide correct and efficient interfacing between the "sub-federations" of the hierarchy.

  3. Develop better performance analysis methodologies to fully determine the performance characteristics of the various data distribution approaches. In our previous studies, synthetic simulation was used to evaluate different filtering algorithms. We plan to use trace-driven simulation based on traces generated from actual scenario simulations.

Real-time Protocols for Channel Assignments and Handoff Handling

in Mobile Cellular Networks

M. Bassiouni, C. Fang and M. Chiu (UCF)

Research Objectives and Significance:

The efficient delivery of real-time data to mobile users is one of the challenges facing third generation wireless networks. Meeting this challenge requires advances in several technology areas such as increasing the retention of battery power, reducing channel interference, and maintaining good quality-of-service (QoS) during mobile handoff. Our research focuses on the problem of designing channel assignment and reassignment schemes that can support real-time connections and maintain good QoS during mobile handoff. Our objective is to design schemes that achieve:

i) Low probabilities for handoff dropping and new call blocking

ii) Low handoff delay, i.e., reduced computational and message exchange overheads

 

Army Relevance:

Wireless and mobile communication technologies will continue to play an increasingly important role in the various functions of the armed forces. The development of efficient channel assignment and handoff protocols is one of the critical problems facing the designers of third-generation wireless systems. Our research addresses the real-time performance issues of handoff protocols and seeks to develop efficient channel assignment and handoff schemes suitable for the delivery of UAV-collected imagery to/from mobile tactical vehicles and the delivery of location-based video/audio data to mobile users. Efficient handoff designs will also help the effort to incorporate live vehicles in military simulation training exercises.

 

Methodology:

In cellular systems, the service region is divided into several coverage cells and each cell has a base station. Base stations are allocated radio frequency channels to service the mobiles in their cells. When a mobile crosses the boundary from one cell to another, the connection with the current base station is terminated and a connection with the new base station is established. This process is called handover or handoff. The first task of our research deals with wireless handoff designs for an important segment of future wireless networks, namely, highway cellular networks. Our scheme uses a cellular architecture with linear topology in which the radio channels used in a given cell cannot be simultaneously used in the two neighboring cells to the left and to the right of that cell.

Two important considerations for real-time traffic during handoff are:

a) If the base station in the new cell does not have a free channel to service the new connection, the handoff request will be declined (blocked), resulting in the premature termination of the connection (call). To solve this problem, channel allocation and reallocation schemes must be carefully designed in order to better utilize the scarce radio spectrum (channels).

b) If the handoff does not occur quickly, the QoS of the real-time connection may degrade below an acceptable level.

The majority of the work on channel allocation and reallocation in cellular networks has concentrated on the first issue mentioned above, namely, achieving better utilization of the radio frequency in order to reduce call blocking. Below, we briefly describe our approach for achieving low blocking rate and reducing handoff delays.

Let M be the list of all available channels and assume these channels are assigned integer indexes starting from 1. Thus M represents the following ordered list

M = {1, 2, 3, …, |M|}

The set M is partitioned into three ordered sublists of equal size, or nearly equal size, as follows:

M = M1 ∪ M2 ∪ M3

To simplify the discussion, we shall assume that the three sublists have equal size, i.e., |M| = 3m for some integer m. Let ρ denote the unary operator that reverses the order of elements in an ordered list. Thus we have

M1 = {1, 2, …, m}   ρM1 = {m, m−1, …, 1}

M2 = {m+1, m+2, …, 2m}   ρM2 = {2m, 2m−1, …, m+1}

M3 = {2m+1, 2m+2, …, 3m}   ρM3 = {3m, 3m−1, …, 2m+1}

The three sublists are assigned as the nominal sets of channels for successive cells in alternating fashion. Each sublist is toggled using the ρ operator after each assignment. For example, cell K−3, cell K, and cell K+3 all have the same set of nominal channels assigned to them, but the perceived orders of these channels in the three cells are M1, ρM1 and M1, respectively. Similarly, cell K−2 is assigned M2, cell K+1 is assigned ρM2, and cell K+4 is assigned M2.

The initial assignment used in our scheme differs from other known algorithms in the literature in that the nominal channels of adjacent cells do not follow the regular compact pattern for avoiding channel interference. For the channel reuse distance under consideration, the compact pattern strategy would partition the channel spectrum into only two subsets and alternately assign these subsets to adjacent cells.

Each base station maintains three ordered lists: self list S, left-neighbor list L, and right-neighbor list R. The list S has the nominal channels of this base station in the same order produced by the assignment process described above. The R and L lists of a base station are inverted copies of the S lists of its right neighbor and left neighbor, respectively.
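A sketch of the alternating nominal-channel assignment described above (0-based cell indices, 1-based channel numbering, and the toggle-every-recurrence rule are inferred from the description; treat the details as assumptions):

```cpp
#include <vector>
#include <algorithm>
#include <cassert>

// Nominal-channel assignment sketch: |M| = 3m channels indexed 1..3m
// are split into M1, M2, M3; cells 0,1,2,... receive the sublists in
// alternating fashion, toggling the order each time a sublist recurs.
std::vector<int> nominalChannels(int cell, int m) {
    int sub = cell % 3;                      // which sublist: M1, M2 or M3
    std::vector<int> list;
    for (int c = sub * m + 1; c <= (sub + 1) * m; ++c)
        list.push_back(c);                   // channels of the sublist
    if ((cell / 3) % 2 == 1)                 // toggle order every recurrence
        std::reverse(list.begin(), list.end());
    return list;
}
```

For example, with m = 4, cell 0 perceives M1 = {1,2,3,4}, cell 3 the reversed list {4,3,2,1}, and cell 6 {1,2,3,4} again, matching the M1, ρM1, M1 pattern of the text.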

The above initial allocation of channels has enabled us to develop channel assignment and reassignment algorithms with the following properties:

1) The scheme has O(1) time complexity per channel request and channel release. All computations for channel assignment, reassignment, borrowing, and release have constant-time execution requirements.

2) The scheme has a communication cost of 2 messages per channel request or channel release.

3) Assuming channels are encoded as integer indexes without gaps, the storage overhead of the scheme at each cell can be reduced to O(1) without affecting the time complexity.

     

Accomplishments:

We have developed a scheme that appears promising in reducing the blocking probability, the computational cost, and the overhead of message exchanges for highway cellular networks. Compared to schemes previously proposed in the literature, our scheme is more suitable for handling connections serving real-time VBR and VCR traffic. This is because the scheme greatly simplifies the search process, avoids expensive computation to evaluate a cost function for each channel in the availability list, reduces the overhead of message exchanges among base stations, and maintains a low blocking probability for new and handoff requests. The preliminary performance tests of this scheme are encouraging.

Planned Activities:

We plan to perform additional tests to investigate the impact of certain modifications (e.g., introducing the Guard Channel policy and Fractional Guard Channel policy) in our scheme as well as the impact of using intracell handoff to release a borrowed channel when the terminated connection releases a nominal channel.

We also plan to extend our research to the case of two-dimensional cellular networks. Our focus will be to develop predictive channel reservation for mobile cellular networks based on GPS measurements. Using GPS and dead-reckoning, each mobile can trace its path and report its movement to its current base station. This base station extrapolates the path of the mobile and initiates a channel reservation request to the neighboring cell that the mobile is heading to.

Relevance Filtering for Scalable DIS/HLA Systems

Y. B. Reddy, M. Balaram (GSU); M. Bassiouni, R. Guha, U. Vemulapati (UCF);

A. Berrached, R. Aló (UHD)

 

Research Objectives and Significance:

The goal of our research in this area is to examine the design of relevance filtering schemes, analyze their reliability, evaluate and improve their filtering efficiency by using various techniques. The successful implementation of efficient relevance filtering in network gateways would help solve one of the challenges facing the design of highly scalable DIS and HLA systems. In HLA, the Data Distribution Management is the RTI component responsible for increasing the scalability of the training exercise and reducing the traffic among federates. Filtering is obtained by allowing federates to declare and use routing spaces and custom filtering schemes. Improving the effectiveness of filtering greatly improves HLA scalability.

Army Relevance:

The research on filtering and routing spaces is crucial to the scalability of HLA currently being developed under the auspices of the Defense Modeling and Simulation Office as the DoD-wide standard for simulation. The results of this research have provided good insight into the behavior of relevance filtering schemes, their real-time performance and their reliability. With this insight, the development of reliable and efficient filtering methods is greatly enhanced.

Methodology:

An HLA routing space is a multidimensional coordinate system by which federates can implement a filtering mechanism without requiring the RTI to have complex domain-specific knowledge. The three components of a vector location are used as the three variables of a three-dimensional routing space. We have used a circular region of interest to determine the relevancy of transmitted messages and implemented filtering algorithms at the sending and receiving gateways. Both analysis and simulation have been used to study the performance and reliability of relevance filtering algorithms.
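The circular-region test itself reduces to a point-in-circle check; a minimal sketch (the 2-D coordinates and the function signature are illustrative assumptions):

```cpp
#include <cassert>

// Distance-based filtering sketch: a state update is relevant to a
// receiver iff the sender lies inside the receiver's circular region
// of interest of radius r. Squared distances avoid a sqrt per message.
bool isRelevant(double senderX, double senderY,
                double recvX, double recvY, double r) {
    double dx = senderX - recvX;
    double dy = senderY - recvY;
    return dx * dx + dy * dy <= r * r;
}
```

A gateway applying this test at transmission can suppress updates irrelevant to every entity behind it, and the same test at reception discards any that slip through.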

Accomplishments:

Filtering at transmission and filtering at reception are two important requirements in all DIS exercises. Algorithms for distance-based filtering and grid-based filtering were developed and used in local and gateway dead-reckoning. There are other approaches to filtering irrelevant messages. One method calculates the fitness of each entity using a genetic algorithm (GA) approach. This approach provides better results because the relevance of a message normally depends on the type of entity, its position, the direction it moves, and other characteristics. These characteristics are coded into blocks of bits and tested for fitness, and messages are delivered to the best-fit entities. The GA method is faster and more accurate, and some work has been done using it. Other approaches in progress include:

     

  • Mapping the entities located in the same terrain region to hosts located in the same LAN helps to localize the set of hosts that would need to receive state update messages from each entity.

     

     

  • Grid based data distribution.

     

     

  • Estimating the entity position using previous state and related information.
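The last bullet, estimating an entity's position from its previous state, is the idea behind dead-reckoning filtering; a minimal sketch (the 2-D state, the threshold eps, and the update rule are illustrative assumptions):

```cpp
#include <cmath>
#include <cassert>

// Dead-reckoning sketch: receivers extrapolate an entity's position
// from its last reported state, so the sender need only transmit a
// new update when the true position drifts beyond a threshold eps.
struct State { double x, y, vx, vy; };

// Linear extrapolation of the last reported state by dt seconds.
State extrapolate(const State& s, double dt) {
    return { s.x + s.vx * dt, s.y + s.vy * dt, s.vx, s.vy };
}

// True iff the extrapolated position has drifted more than eps from
// the actual position (tx, ty), i.e., an update message is needed.
bool needsUpdate(const State& last, double dt,
                 double tx, double ty, double eps) {
    State p = extrapolate(last, dt);
    return std::hypot(p.x - tx, p.y - ty) > eps;
}
```

Suppressing updates while the dead-reckoned estimate stays within eps is what reduces irrelevant traffic in both local and gateway filtering.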

     

Planned Activities:

The group has demonstrated progress and expects to accomplish the following during the next two years:

     

  • Adapt the two-stage filtering scheme to the routing space mechanism of HLA, investigate the impact of multicast algorithms on its performance, and perform reliability analysis of filtering in HLA.

  • Develop grid-based data distribution schemes in the HLA and compare them to other methods.

  • Use a circular cell approach to calculate the relevance of each entity in the gateway (federation).

  • Extend the GA methodology, which uses a building-blocks approach to find the fitness of each entity at the gateways and filter appropriately. Genetic algorithms have a built-in parallel search property, so the processing is done much faster and more accurately. Work is now progressing toward classifier systems to predict the best-fit entities at a given time, and the algorithms will be extended to distribute messages to selected federates. Since parallelism is a built-in property of GA and other approaches can be coded into GA parameters, the GA approach should become more popular for relevance filtering.

     

Triangular Level Quorums for Distributed Mutual Exclusion

R. Guha and J. Chu (UCF)

 

Research Objectives and Significance:

Mutual exclusion plays an important and fundamental role in the design of distributed systems. Mutual exclusion algorithms are classified into two categories: token-based and non-token-based (or permission-based). In the token-based approach, a privilege message called a token is maintained in the distributed system to control access to the critical section. A token-based algorithm requires few message exchanges to enter a critical section, but the recovery procedure is expensive if the token is lost. In the non-token-based approach, a node wishing to enter its critical section normally has to exchange messages with all nodes in the distributed system. For a distributed system with a large number of nodes, this approach becomes impractical. On the other hand, a special class of non-token-based algorithms, known as the quorum-based approach, requires each process to exchange messages with only a subset of specific nodes in the distributed system before entering its critical section. The specific subset is called a quorum, and the set of quorums is called a coterie. The quorum-based approach is attractive due to its well-defined concept, low message overhead, and its capability to tolerate both site and link failures. In the quorum-based approach, among many desirable criteria, the quorum size and the coterie size are two basic issues in the design of quorums. The objectives of this project are to (1) develop a new approach to generate relatively small and equal-sized quorums, called triangular level quorums, and (2) study the properties satisfied by triangular level quorums.

Army Relevance:

The HLA, designated as the standard architecture for distributed simulation systems, can only be implemented as a large distributed system. For such a system, it is desirable to have the quorum size as small as possible to enter a critical section. On the other hand, to increase the fault tolerance capability, the coterie size should be as large as possible. This project addresses issues that influence the efficiency and fault tolerance capabilities of HLA implementations.

Methodology:

We have developed the concept of triangular level quorums as follows. Let U = {1, 2, …, N} be the nodes in a distributed system. Let the m sets S1, S2, …, Sm be subsets of U such that Si ∩ Sj = ∅ for 1 ≤ i ≠ j ≤ m, |S1| = 1, |Si| = |Si−1| + 1 for 2 ≤ i ≤ m, and U = S1 ∪ S2 ∪ … ∪ Sm. The m-level structure S1, S2, …, Sm forms an equilateral triangular structure, and the level coterie constructed from the triangular structure is called an m-level triangular level coterie, denoted Dm. Because |S1| = 1 and |Si| = |Si−1| + 1 for i = 2, 3, …, m, we have |Si| = i for i = 1, 2, …, m.

Let U = {1, 2, 3, …, N} be the N nodes in a distributed system. The N nodes are logically arranged into m levels S1, S2, …, Sm, where the Si are non-empty subsets of U, Si ∩ Sj = ∅ for all 1 ≤ i ≠ j ≤ m, and U = S1 ∪ S2 ∪ … ∪ Sm. If |Si| = i for 1 ≤ i ≤ m, then the level coterie Dm = Q1 ∪ Q2 ∪ … ∪ Qm is called an m-level triangular level coterie, and any q ∈ Dm is an m-level triangular level quorum, where the Qi, 1 ≤ i ≤ m, are defined as

 

Q1 = { S1 ∪ ( ∪ j=2..m {Xj} ) | Xj ∈ Sj, j = 2, 3, …, m }

Q2 = { S2 ∪ ( ∪ j=3..m {Xj} ) | Xj ∈ Sj, j = 3, 4, …, m }

…

Qi = { Si ∪ ( ∪ j=i+1..m {Xj} ) | Xj ∈ Sj, j = i+1, i+2, …, m }

…

Qm = { Sm }

An example: let U = {aij | 1 ≤ i ≤ 5, 1 ≤ j ≤ i} and Si = {aij | 1 ≤ j ≤ i}, 1 ≤ i ≤ 5. The nodes are logically arranged into a 5-level equilateral triangular structure as depicted in the figure below. By the definition above, the following are some instances of the 5-level triangular quorums constructed from Si, i = 1, 2, …, 5.

q1 = {a11, a22, a32, a43, a55}, q2 = {a21, a22, a31, a43, a54}, q3 = {a31, a32, a33, a44, a53},

q4 = {a41, a42, a43, a44, a52}, q5 = {a51, a52, a53, a54, a55}, q6 = {a11, a21, a31, a41, a51}.

It is clear that qi ∩ qj ≠ ∅ and qi ⊄ qj for 1 ≤ i ≠ j ≤ 6.

Note that |q1| = |q2| = … = |q6| = 5.

Triangular Structure Level Size of Level

O 1 |S1| = 1

O O 2 |S2| = 2

O O O 3 |S3| = 3

O O O O 4 |S4| = 4

O O O O O 5 |S5| = 5

Figure: A 5-level triangular structure used to construct a 5-level triangular level coterie

From the m-level triangular structure and the definition of Dm, for any q ∈ Dm we have q ∈ Qi for some 1 ≤ i ≤ m, and the size of q equals |Si| + m − i = m. That is, any quorum q in an m-level triangular level coterie Dm has the same size |q| = m. The number of quorums in an m-level triangular level coterie Dm is (m! + m!/2! + m!/3! + … + m!/m!), so m! ≤ |Dm| < 2m!. Let the total number of nodes in an m-level triangular structure be N; then N = m(m+1)/2. This implies that m = (2N − m)^1/2 = ½[(8N+1)^1/2 − 1] ≈ (2N)^1/2.
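The construction above can be sketched in a few lines of code. The left-to-right, top-to-bottom node numbering (ids 1..m(m+1)/2) is an illustrative encoding, not from the source:

```cpp
#include <vector>
#include <set>
#include <cassert>

// Triangular level quorum sketch: nodes are arranged in levels
// S1..Sm with |Si| = i. A quorum from Q_i is all of S_i plus one
// chosen node X_j from every deeper level S_j, j = i+1..m.

std::vector<int> level(int i) {              // node ids of S_i
    int first = i * (i - 1) / 2 + 1;         // first id on level i
    std::vector<int> s;
    for (int k = 0; k < i; ++k) s.push_back(first + k);
    return s;
}

// Build a quorum in Q_i; pick[j] is the 0-based index of the node
// taken from level j (only entries with j > i are used).
std::set<int> quorum(int i, int m, const std::vector<int>& pick) {
    std::set<int> q;
    for (int n : level(i)) q.insert(n);      // all of S_i
    for (int j = i + 1; j <= m; ++j)         // one X_j per deeper level
        q.insert(level(j)[pick[j]]);
    return q;
}
```

Every quorum produced this way has exactly m nodes, and any two quorums intersect: a quorum from Q_i contains all of S_i, while a quorum from Q_j with j < i contains some node of S_i.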

 

 

Accomplishments:

The following tasks have been accomplished:

  1. Developed a new approach, called the triangular level coterie, to generate relatively small and equal-sized quorums.

  2. Proved that the generated coterie has many interesting properties, such as non-domination, equal-sized quorums, and no corresponding vote assignment.

  3. Proved that the quorum availability of triangular level coteries converges as the number of nodes increases and satisfies the complementary property.

     

Planned Activities:

We plan to expand our work to the following areas:

     

  1. Investigate trapezoid level coteries. If the number of nodes is not m(m+1)/2 for some integer m, and equal-sized quorums are required, then we can form a trapezoid level coterie.

  2. Investigate incomplete triangular level coteries. If the number of nodes is not m(m+1)/2 for some integer m, another approach is to add dummy nodes to form a triangular level structure and then form triangular level quorums. The dummy nodes are then replaced by the real nodes.

  3. Develop the concept of coterie transformation. A coterie transformation is a function that generates a new coterie from a given coterie. It is expected that, as a special case, the m-level triangular level coterie can be generated from the (m−1)-level triangular level coterie by this transformation.

 

 

Extensions to Data Distribution Management Services in the HLA

 

A. Berrached, R. Aló (UHD)

M. Bassiouni, R. Guha, U. Vemulapati

(UCF)

M. Balaram (GSU), D. Williams (FAMU)

 

Research Objectives and Significance:

The objectives of this project are to (1) investigate the effects of object fidelity requirements on the performance of HLA data distribution, (2) propose changes to current HLA specifications to incorporate object fidelity requirements in its data distribution management services, (3) investigate methods for efficiently implementing the new services, and (4) explore other extensions to the DDM services.

Army Relevance:

This project addresses the efficiency and effectiveness of data distribution in the HLA. Results of this project can improve the performance and scalability of large distributed simulations.

Methodology:

One of the key features of the HLA is that it provides a set of services for the explicit management of efficient data distribution. Federates specify the requirements under which they are interested in or willing to receive or send data updates, and the RTI attempts to match those requirements. One limitation of current HLA data distribution services, however, is that relevance filtering is done in an all-or-nothing fashion: if a publisher meets the conditions stated by the subscribing federates, then every data update sent by the publisher is delivered to all the subscribing federates. The rate at which the subscribers receive data updates is controlled exclusively by the publisher of the data.

We define fidelity for an object as the minimum rate at which an object needs to receive data updates to maintain an accurate and consistent view of the "virtual world". Simulation objects have specific fidelity requirements depending on the characteristics of the objects they simulate and their operational status. Incorporating the fidelity needs of the subscribing federates in the data distribution management services of the HLA can tremendously reduce the amount of irrelevant data traffic. The objective here is to incorporate fidelity with minimal changes to current HLA interface specifications and their implementations.

We propose to add intelligent control to the DDM component of HLA as well as new services at the RTI level. We propose history-based parameter validation and prediction of invalid ranges of values in the specification of update/subscribe region calls. Federates will be warned if any of the specified parameters seem out of place. Currently, the dimensions, units, and the number of attributes of a federation entity are defined in the FED (Federation Execution Data). Federates specify the upper and lower bounds when they create a region by supplying appropriate unit-less values to the RTI. If a federate misinterprets the values of any attribute (in terms of units or of how large or small they can be), there is no way for it to get feedback from the owner of that attribute. We propose a mechanism that allows the parameters to be validated (if requested).

Another service allows a federate to get the identities of other subscribing federates. Currently, a federate that owns an attribute may associate an update region with it; all other federates that are interested in changes to that attribute will associate subscription regions defining their regions of interest. These intentions are conveyed to the RTI through appropriate services (such as Update Region, Subscribe Region, etc.). There are three possible scenarios: for each update region, there may be zero, one, or many subscription regions that overlap it. Currently, if no other federate's subscription region overlaps with a specific update region, the RTI informs the owning federate to turn off actual updates (through the Stop Registration For Object Class callback). This is done so that the RTI need not be told of any changes to attributes that no one else is interested in. Conversely, the RTI informs an owning federate to turn on updates as soon as there is at least one federate with an overlapping subscription region (through the Start Registration For Object Class callback). The RTI also informs federates to discover objects/attributes if there is an overlapping update region. If more than one federate overlaps with an update region, each of those federates will be aware of the object. However, the owning federate only knows whether there is at least one interested federate at any given time; there is currently no way for an owner to know all the other interested federates.

Finally, we would like to add a generic "adoptive federate" that is ready to take over any entity that another federate would like to relinquish (orphan entities).

Accomplishments:

We proposed changes to the HLA data distribution services to incorporate fidelity-based filtering. These changes consist of defining a normalized fidelity measure in each routing space and augmenting DM and DDM subscription service requests with an additional parameter that specifies the fidelity requirements of subscribing federates. The RTI defines a set of discrete fidelity levels in each routing space from which federates can choose; a federate subscribes by selecting the fidelity level that best matches its needs. This scheme allows the new changes to be incorporated in HLA services with no modifications to the existing multicast protocols, except that separate multicast channels (addresses) need to be assigned to each of the fidelity levels to achieve the desired data distribution.

Additionally we proposed two new services (one for a federate to find the identity of other interested federates, another for validation of parameters). We also discussed the justification for these new services, and the ease of implementation within the current RTI structure. The scenario of building a social-service federate was also discussed in detail.

Planned Activities:

Fidelity based filtering has the potential of reducing the amount of irrelevant data traffic. However, it also incurs additional overhead cost on the RTI and the underlying network. We plan to evaluate, qualitatively and quantitatively, the costs and benefits of fidelity based relevance filtering on the performance of HLA data distribution. We plan to use trace-driven simulation based on traces generated from actual scenario simulations.

Compression and Security of Large Datasets

Patrick Bobbie, Deidre Williams, and Henry Williams (FAMU)

 

Research Objective and Significance:

The main objective of this project is to research efficient secure transmission methods. A high volume of information is used to represent data for various computer transactions. Managing and/or securing these extremely large data sets can pose problems in storage space and processing time. Compression addresses the issue of manageability, and encryption provides one of the most effective means of protection against common threats. The primary focus of this past year has been security.

Army Relevance:

Security is of paramount importance to the Army because of the sensitive nature of many documents. It is desirable to protect these documents from disclosure to, and modification by, unauthorized persons. Compression is important because of the volume of documents. It is especially important to Distributed Interactive Simulation (DIS) because of the high volume of information (bits) in its various transactions. Many of the transactions, which involve images, can tolerate a fidelity loss without impeding their functionality. Compression thus leads to better real-time simulation with reduced storage space and processing time.

Methodology:

Many encryption schemes were identified through a literature search, and more are being developed. The strength of any encryption scheme lies in its keys. The primary focus of this past year has been the McEliece public-key cryptosystem. This scheme is based on the theory of error-correcting codes. In addition to offering secure transmission, it has been shown to possess the capability to correct communication channel errors. However, the McEliece encryption scheme causes large data expansion.

A compression study was initiated as a possible method to counteract the data expansion caused by the McEliece encryption scheme. A technical literature search indicated many research activities in areas such as data (text), image, and video compression. We have begun our investigation by reviewing digital signal processing principles, such as the FFT. We will also investigate other mathematical techniques such as wavelets.

Accomplishments:

Some "weak" keys that do not offer effective secure transmission have been identified. This finding has been incorporated into a key generation algorithm that safeguards against currently known "weak" keys. Also, a web-based compression tutorial was initiated to further enhance the knowledge of faculty, to develop a testbed of compression routines, and ultimately to introduce the concepts to students.

An Online Interval Calculator

M. Hung and C. Hu (UHD)

Research Objective and Significance:

The objective of this project is to develop an Internet-accessible interval computation tool that can reliably perform fundamental interval arithmetic and set operations and bound elementary interval functions, with an interactive graphical user interface. Interval computation is a relatively new technology. It can better model the real world and provide reliable solution bounds for application problems. Most people are still not familiar with interval computing, and an interval calculator is not available to most users. This project provides an easily accessible, fundamental computation tool for people who are interested in interval computation.

Army Relevance:

Interval computing technology, as a basic computation method, can be applied to almost all computations required by Army applications.

Methodology:

To make the software package Internet-accessible and platform independent, we implemented it as a Java applet using JDK 1.1.6. We also used Microsoft Visual J++ 1.1 to implement the graphical user interface.

The entire package contains the following classes:

  1. Rmath Class: This class provides methods to perform directed rounding.

  2. RealInterval Class: This class defines the interval type for real intervals.

  3. IAMath Class: This class contains methods to perform basic interval arithmetic and set operations, and to bound interval elementary functions.

  4. IAException Class: This class handles error messages.

  5. Cacpad and DisplayLayout Classes: These two classes implement the interactive graphical user interface.

  6. I_Calculator Class: This class is the main program, which integrates all of the above to form a functional online interval calculator.

     

Accomplishment:

The initial implementation of the interval calculator is now available through the Internet at http://gauss.dt.uh.edu/IntCalculator/I_C.html. A draft of the user guide is linked from the same page.

Future Plan:

 

  • We will revise the implementations of some elementary functions to use true interval algorithms rather than the floating-point methods provided by Java.

  • We will add more features to the online calculator.

     

 

ParaGlobSol -- A Parallel Global Optimization Software Package

C. Hu (UHD)

Research Objective and Significance:

The objective of this project is to improve the performance of a generic non-linear global optimization interval software package named GlobSol through parallel/distributed computing. Non-linear global optimization is a fundamental research problem. With interval computing, GlobSol can reliably find the global optimum in a given domain for many application problems. Our initial parallel implementation has significantly improved the performance of this package.

Army Relevance:

ParaGlobSol is a generic global optimization package. It can be used to solve optimization problems arising in Army applications.

Methodology and Accomplishments:

 

  • Our initial parallel implementations were done on networked Sun Ultra workstations.

  • We use MPI (the Message Passing Interface) to distribute the workload across multiple processors.

  • The original GlobSol package was written in Fortran 90, but MPI does not have a Fortran 90 implementation. By using the backward compatibility of Fortran 90, we developed an MPI working environment that supports interval computations with Fortran 90.

  • Our implementation is coarse-grained SPMD (Single Program Multiple Data).

  • Both static and dynamic load-balancing techniques are applied to improve the overall efficiency.

  • Super-linear speedup was observed for some test problems.

     

Future Plan:

 

  • Develop and implement other parallel schemes.

  • Improve the overall efficiency and scalability of this parallel software package.

  • Apply this package to solve computation-intensive, Army-related optimization problems.

     

 

1.2 ADS Visualization and Synthetic Environment (ADS-VSE) Task

Task Coordinator: Patrick Bobbie

OVERVIEW

Objectives and Significance:

The main objectives of the research performed by the ADS Visualization and Synthetic Environment team are: (a) To develop and evaluate algorithms for graphics and visualization; (b) To analyze the real-time performance of the algorithms; and (c) To tailor the algorithms to phenomenological (e.g., terrain, fire, smoke) datasets and distributed computations in synthetic environments using interactive visualization methods. Toward this end, the ADS-VSE team embarked on the following subtasks:

(1) A Computational Geometry Tutorial (CGT) system was developed to provide a distance learning capability for ADSRC-wide students and faculty in the study of basic concepts of computational geometry. The CGT system is continually enhanced to support curricular-oriented materials in computer graphics and visualization. The enhancement includes the introduction of Virtual Reality (VR) modeling component for animation, quality rendering, and visualization of graphic objects.

(2) A subproject focuses on the two-dimensional Ising model to represent and simulate the dynamics and behavior of fire and smoke in synthetic environments. Generally, we consider the elements of fire and smoke as a system of particles that undergo phase transitions and energy-level changes and exhibit positional changes, to accurately represent real-world phenomena. However, real-time simulations of large systems with large two-dimensional or three-dimensional lattices require significant processing power. One thrust is to increase the performance of the simulation system by harnessing the overall computing power of the underlying distributed processors. The main goal is to improve the real-time performance of the Ising model using multiple processors. This subtask complements another Ising-related effort which focuses on the mathematical underpinnings and the physics of the model.

(3) In furtherance, a subproject is implemented for the design, development and implementation of efficient computer simulation of fire propagation, by extending the percolation model. The percolation approach has a well documented history in basic fire propagation behaviors through random media with the added effects of wind. An extension of this basic model to include topology, non-random media, moisture content and effects of both ground propagation and tree canopy propagation allows for realistic simulations of battlefield fire propagation.

(4) Another task focuses on providing a compendium of the field of fire and smoke modeling. The compendium also reports on the current physics-based modeling of flame propagation as well as flame spreading in fires, design and operation of gas-fired heat transfer equipment (combustors), and flame spreading over the surface of thermoplastic fuels during the ignition process in hybrid rocket motors.

(5) A subtask focuses on spoken language understanding for battlefield simulations. This work involves studying existing techniques for incorporating speech interactions in simulations. A spoken dialogue system allows a user to interact with a computer application using conversational speech. A Web-based tutorial system for this speech-based subproject is being developed to demonstrate the applicability of spoken language-based operators in battlefield simulations and synthetic environments. The ModSAF and NPSNet environments are used to test these speech-based techniques.

(6) A high volume of information is used to represent various computer transactions. Managing these extremely large data sets can pose problems in storage space and processing time. The use of compression techniques to address these issues is investigated under another subproject.

(7) Another task focuses on the transport processes associated with phase change, such as boiling (nucleate, film, and flow), which are among the most efficient heat transfer modes and are characterized by small temperature differences and high heat fluxes. This effort dovetails with the Ising model for simulating phenomenology in virtual environments.

Army Relevance:

The CGT system is a useful resource for distance learning and offers a tutorial environment for learning computational geometry concepts outside of the classroom setting. Its continual development to benefit all ADSRC participants is important, as it minimizes the learning curve and offers a quicker transition to the development and implementation of more advanced research topics in ADS.

Applying parallel and distributed computing methods to the Ising-based simulation minimizes the overall simulation time. This step is important as it leads to near real-time visualization of geometric objects and entities in synthetic environments, e.g., virtual battlefield situations. The incorporation of an extended percolation model for fire simulation makes virtual environments, e.g., battlefield simulations, much more realistic. This improves the training of soldiers and facilitates their acquisition of military skills, thereby enhancing their readiness for battle. Simulation models that represent battlefield situations also help to improve the understanding of phenomenological occurrences and their impact on the soldier.

Distributed simulation environments have played a significant role in an ever-increasing number of critical military and commercial applications. Applications ranging from fire safety training to virtual warfare benefit tremendously from this emerging technology. The virtual battlefield necessitates the inclusion of real-time obscurants such as fire and smoke into the computer simulation in complex ways. These models and simulations will be of wide relevance to government and industry organizations.

The incorporation of spoken language-based operatives in battlefield simulations provides a way to manage multiple participants, e.g., commanding officers or soldiers under training, who are interacting with a simulation using speech.

Simulation systems like DIS use a high volume of information to represent images in its various transactions. The transport of such data in DIS and virtual environments requires highly compressed data with little or no loss of information. The investigation in data compression and image understanding bears on the Army's continual need for maintaining high fidelity data and functionality.

Accomplishments:

The areas of accomplishments have been categorized into 'Research': related to impact on student and faculty research; 'Tools': related to software tools resulting from the ADS-VSE team; 'Student/Labs': related to impact on student research, training, and advancement to graduate studies; and 'Impact and Visibility': related to the overall impact of the ADSRC VSE effort.

Research: In addition to conference presentations, the ADSRC ADS-VSE team published several articles in both conference proceedings and journals. These results are summarized in the abstracts section (3), along with the faculty and students who participated in developing them in section (2) of this annual report. Further, the summary pages in section (1) of this report include subsections on accomplishments which detail each task's results.

Tools: The CGT system, a graphics and visualization tutorial system, a spoken language-based dialogue system, an AI-based (fuzzy control) graphics system, simulated fire/smoke models for demonstrating the effect of fire propagation, and the MPI implementation of the Ising model are among the software tools and prototypes developed by the ADS-VSE team.

Student and the ADSRC-VSE Labs: A large number of students are supported through research training, software development, and teaching-oriented training. The ADS-VSE faculty have worked with both undergraduate and graduate students who are now experienced with computer graphics programming and Virtual Reality concepts, are capable of using the SGI (Silicon Graphics, Inc.) workstation environment, and have gained computer networking experience through constant exposure to these tools. Currently, R. Giroux (FAMU) and J. Brown (FAMU) are ADSRC students pursuing the Masters degree, and W. Ding (FAMU) completed the Masters degree in Software Engineering Sciences (FAMU) in 1998. Several past students are currently employed by major companies; for example, students were placed at Lockheed-Martin (K. Stephens - FAMU), Abbott (D. Howard - FAMU), and IBM (K. Carson - FAMU). The spoken language subproject, with its JAVA/Web-based graphics and visualization, has drawn several interested students to the ADSRC project.

Impact and Visibility: The ADSRC VSE labs have been used for organized workshops; e.g., the FAMU VSE lab has been used by the College of Pharmacy (FAMU) for a 3-day workshop on molecular modeling, which was offered to their graduate and post-doctoral students. The ADSRC-FAMU VSE lab is currently used to support a departmental course in computer graphics, a spinoff from the ADSRC project. The VSE lab at UHD includes a Virtual Environment (VE), including a camera and large-screen display units, and is used for various virtual reality research activities. The VSE lab at UCF supports various Masters and Doctoral student theses/projects related to graphics and visualization.

Further, ADSRC students Michael Arradondo and Charles Birmingham participated in this training. Michael Arradondo collaborated with a research team at LANIA, Xalapa, Mexico on a graphics/visualization and vision project, as an extension of the ADSRC VSE leveraging and collaborative effort. This effort is linked with a NASA-supported research laboratory at the Univ. of Houston, led by Prof. Bowen.

 

 

Planned Activities:

The various subtasks, discussed above, have been summarized in section (1) of this report. The summaries contain subsections for activities which are planned for the coming year, and beyond.

The Ising-based simulation research, including the use of distributed and parallel computing techniques, will be continued by developing the underlying mathematics and physics to accommodate assumptions necessary for more realistic simulation. A distributed display of various phenomena, e.g., fire and smoke spread, across multiple computer screens will help elucidate their behaviors observed from multiple viewpoints in simulated situations. This work will integrate findings from the heat transfer subtask.

The effort on heat transfer will continue with correlation testing for a wider range of operating variables, such as liquids with widely differing physico-thermal properties (such as cryogenics), heat flux, and pressure.

We will continue to design and develop an advanced software development environment for building spoken language dialogue systems. The development environment will be a visual programming environment, designed to allow a developer to use a drag-and-drop interface for creating a distributed speech application.

Additionally, the ADS-VSE team will explore and extend collaborative work which will integrate AI (fuzziness), Vision (image capture for software control and remote control of terrain vehicles), Distributed Graphics, and Software Engineering techniques which are imperative for developing large systems. Lastly, innovative steps will be taken to increase student teaching, training, research, and placement in graduate programs. On this last point, we will seek relationships with institutions like the Naval Postgraduate School (NPS), Monterey, CA, where much work is under way on software requirements engineering for large systems, and the University of Pennsylvania, where advanced software development environment research efforts will provide opportunities to our students.

 

 

 

Fire and Smoke Modeling in ADS Environments

H.L. Williams and P.O. Bobbie (FAMU)

Research Objectives and Significance:

The main objective for this effort is to design, develop and implement efficient physics-based mathematical models for the computer simulation of fire, smoke, and other phenomenological elements in distributed simulation environments. This theme has been the primary focus of our work during the past year. It is also desirable to create a computer simulation model of the virtual battlefield with greater fidelity and real-world characteristics. This feature will allow for realistic simulations of battlefield operations. An ongoing objective is to build a joint research program in the Mathematics and Computer Sciences Departments within the ADSRC Consortium that will focus on useful, efficient modeling techniques for systems of particles using physics-based principles. These models and simulations will be of wide relevance to government and industry organizations.

Army Relevance:

The incorporation of real-time obscurants into computer simulations will make the virtual battlefield much more realistic. This will improve the training of soldiers and facilitate their acquisition of military skills, which will enhance their readiness for battle. The interactions between smoke, rain, and other elements are unpredictable during war situations. Simulation models that represent battlefield situations help to improve the understanding of such phenomenological occurrences and their impact on the soldier.

Methodology:

Heretofore, the two-dimensional Ising model, based on Monte Carlo simulation methods and matrix analysis, has been the primary tool for analysis with respect to the parameters that characterize fire and smoke as a battlefield obscurant. We considered the Ising model as a mathematical model which represents a system of particles, the constituent parts of fire and smoke, for the simulation. Hence, the effects of temperature, magnetization, and force field on the particles as a result of the spinning effect (random motions) of the particles were evaluated mathematically. This allowed an accurate modeling of the various orientations or configurations of the particles, and eventually the simulation of the behavior of fire and smoke. A search of the technical literature on fire and smoke has revealed a wealth of research activities using various modeling approaches. By far the most prolific technique involves the use of computational fluid dynamics (CFD) techniques to study fire behavior, principally based upon the Navier-Stokes equations. During the current year, we have begun to investigate CFD techniques and other physics and mathematical approaches to analyze fire, smoke and other phenomenological elements in our computer simulations.

 

Accomplishments:

A two-dimensional simulation of smoke based upon the Ising model has been implemented in C code in the SGI/MPI environment. This effort has also incorporated fire behaviors into the model. Also a more precise mathematical formulation of the simulation problem has been done. Several program segments in C programming language have been written and implemented to test aspects of the smoke model. For performance considerations, a parallel version of the code has been developed and is being tested.

 

Future Work:

The following tasks will be focused on in our future work.

 

  • Collect, analyze, and distribute specific information in the form of references on existing models of fire and smoke.

  • Develop mathematical mechanisms to simulate the physical behavior of smoke and fire in terms of temperature, external magnetic force, and motion of smoke particles based on the Ising model.

  • Develop a 2-dimensional realization of smoke using the Ising model together with the mathematical mechanisms developed to simulate the physical behavior of smoke and fire. Rotations will be used to simulate 3-dimensional effects in a virtual scene created with existing software tools, such as OpenGL.

  • Develop a direct 3-dimensional realization of smoke and fire using the mathematical mechanisms previously developed to simulate the physical behavior of smoke and fire.

  • Investigate how this proposed analytical, physics-based model for fire and smoke compares with existing work on the simulation of fire and smoke.

  • Conduct a more extensive review of CFD techniques for simulating fire and smoke behaviors. The modeling of fire and its related phenomena has many implications and academic interests. This type of modeling and the resulting simulation is, understandably, a complex undertaking because of all the academic disciplines that are relevant and must be applied to the development of a comprehensive model. This work will result in a compendium of models in the field of fire and smoke modeling and serve as a report on current physics-based modeling efforts in this important area of research.

An Extended Percolation Model for Fire Propagation Simulation

R. K. Guha and C. Wallace (UCF)

Research Objectives and Significance:

The main objective for this effort is to design, develop and implement an efficient computer simulation of fire propagation by extending the percolation model. The percolation approach has a well documented history in modeling basic fire propagation behaviors through random media with the added effects of wind. An extension of this basic model to include topology, non-random media, moisture content and the effects of both ground propagation and tree canopy propagation will allow for realistic simulations of battlefield fire propagation. These models and simulations will be of wide relevance to government and industry organizations.

Army Relevance:

The incorporation of an extended percolation model for fire simulation will make the virtual battlefield much more realistic. This will improve the training of soldiers and facilitate their acquisition of military skills, which will enhance their readiness for battle. Simulation models that represent battlefield situations help to improve the understanding of phenomenological occurrences and their impact on the soldier.

Methodology:

The implementation of a percolation model begins by superimposing a lattice of sites and bonds over a given finite region. The sites correspond to the vertices and the bonds to the edges of an undirected graph G(S,B). The percolation model can be based on either a site approach or a bond approach. We have chosen to use the terminology of the site approach because of its intuitive mapping onto the terminology of fire spread.

In a fire spread model we start with a finite lattice, so the percolation process is guaranteed to produce a finite graph. However, if we assume that the lattice can be unbounded, we are presented with the possibility of producing an infinitely large adjacency graph. If we view the system in terms of a graph G = (V,E), where V is a set of vertices (sites) and E is a set of edges between the vertices of V, we can then state that if the product, over all edges x in E, of the expectation of the existence of edge x is non-zero, then the probability of the percolation process producing an infinite lattice L is non-zero.

Several extensions to the basic model are considered. The first is a directed addition to model the effects of wind. The second is the use of a non-homogeneous medium that requires enhancement of the method used to determine the probability of occupation. The third extension accounts for topographic variations in the medium.

As the wind blows a forest fire, the speed of its spread increases. That is to say, not only is a fire more likely to spread in the same direction as the wind, it will also spread in that direction more rapidly. Given a model where discrete time steps are representative of real-time intervals, an alternative approach to modifying the probability of occupation is needed. To this end we have developed the approach of neighborhood skew. In this approach the speed of the wind determines the size of the neighborhood, and the skewing process gives the neighborhood its directional configuration. The combination of the variable neighborhood size, the skew of the neighborhood, and the orientation of the originating site relative to its neighborhood reflects the direction in which we wish to direct the percolation.

Accounting for the effects of non-homogeneous fuel sources requires a mapping strategy to apply to potential fuels. Mappings for ignition threshold (which is separate from the probability of occupation), burn duration, and moisture content have been developed and implemented for a small set of test materials. In this case we are using components of both the site and bond percolation approaches. The duration that a site will remain active is also relative to the material type. At this point fuel densities are assumed to be homogeneous; allowing densities to vary by site would provide another parameter to modify the burn duration of a site.

The effect of terrain can be significant in the propagative behavior of a fire. The crest of a mountain often serves as a natural fire break, while steep canyons can serve as a furnace. This is a direct result of heat's characteristic rise. Using a simple z-buffering technique, each site is assigned a z value, or elevation. This value affects both the bond and site aspects of the model. Sites at a given elevation will propagate to higher elevations with a greater probability than to lower elevations. In addition, the probability that a higher-elevation site will ignite is increased. In both cases, the amount of increase in probability is based on the slope produced by the two sites. The angle of this slope varies from 0 degrees to 90 degrees. By applying the sine function to the angle we generate a number between 0 and 1. This output is then added to the probability of occupation in the case of the bond model and to the probability of ignition for the site model, which provides the modified probability to use in the percolation process.

Accomplishments:

To evaluate the accuracy of the model constructed, a test fire was simulated. The fire modeled was the South Canyon fire, which started on July 2, 1994. On July 6, 14 firefighters were killed as dry conditions, highly flammable Gambel Oak trees and open grass, high winds (45 mph), and steep slopes (45 to 90 degrees) combined to create a fire which spread at 18 mph with flames 200 to 300 feet high. Applying our original model to the location of the South Canyon fire, we found that we had limited the potential rate of spread to just under 20 feet per second (fps); the actual fire had spread at 26.4 fps. By adjusting the scale of the effect that wind could play on propagation, we were able to meet and exceed the 26.4 fps rate of spread. Once this modification was made, a fire similar in speed of spread, duration, and direction was easily achieved.

 

One shortcoming of the model was in the modeling of the directional spread of the fire. It would appear that as the wind increases to extremely high speeds, it not only decreases the probability of spread in the opposite direction but also begins to decrease the probability of spread to sites perpendicular to its direction. This has the effect of accentuating the directional characteristics of the fire, producing column-like spread patterns through homogeneous environments.

Future Work:

The following tasks will be the focus of our future work.

  • We plan to continue our experiments with other known fires.

  • We plan to examine the shortcomings of the current model and update it to rectify those limitations.

 

 

Parallel Implementation of the Ising Model using MPI

Patrick Bobbie, H. Williams, Rupert Giroux, Shawn Roper, C. W. Birmingham,

M. Arradondo (FAMU)

Research Objectives and Significance:

To partition the lattice structure of the Ising Model for distributed/parallel computation in order to minimize the overall simulation time. This step is important as it leads to near-real time visualization of the virtual battlefield phenomena.

Methodology:

The Ising model is a powerful tool that facilitates the study of statistical mechanical systems, which are often characterized by the existence of phase transitions. Phase transitions of statistical mechanical systems include phenomena such as the transition of a material from an anti-ferromagnetic state to a ferromagnetic state or the evaporation of water into steam. In order to model phase transitions of a statistical mechanical system, the energy levels or states of a large number of particles must be analyzed. The Ising model accomplishes this task in two ways. First, the particles are arranged in a lattice and assigned energy levels. Second, the interaction of the particles is analyzed based upon the energy levels. By using two- or three-dimensional lattices and studying the interactions of a cell with its neighboring cells, a model of a statistical mechanical system with phase transitions is possible. However, real-time simulations of large systems with large two-dimensional or three-dimensional lattices require significant processing power. Using a distributed computing environment with a Message Passing Interface (MPI) system can reduce the time required to process large lattices mathematically and graphically.
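One common way to partition a lattice for MPI is a row-block decomposition, where each process owns a contiguous band of rows plus one "ghost" row above and below, exchanged with its neighbors each sweep (e.g., via MPI_Sendrecv). The sketch below shows only the ownership calculation; the actual ADSRC partitioning scheme may differ.

```python
def partition_rows(n_rows, n_ranks):
    """Row-block decomposition of an n_rows-tall Ising lattice across
    n_ranks processes.  Returns, per rank, the (start, end) row range
    it owns, distributing any remainder rows to the lowest ranks.

    In an MPI code each rank would additionally hold one ghost row
    above and below its band, refreshed from neighboring ranks before
    every Metropolis sweep.
    """
    base, extra = divmod(n_rows, n_ranks)
    ranges = []
    start = 0
    for r in range(n_ranks):
        size = base + (1 if r < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges
```

Because an Ising spin interacts only with its nearest neighbors, each rank needs just the two ghost rows per sweep, so communication stays proportional to the lattice width rather than its area.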

Accomplishments:

The parallel version of the Ising algorithm is running in the MPI environment. The simulation is being mapped into a datafile for conversion into OpenGL format for visual representation and understanding of the inherent phenomenology.

Physics-Based Models of Fire Simulation

 

P. Sharma, O. Bignall (GSU), H. Williams, P. Bobbie (FAMU), O. Sirisaengtaksin (UHD),

R. Guha (UCF)

 

Research Objectives and Significance:

An accurate understanding of the flame propagation mechanism and the modeling of fire have many implications for public safety, design of industrial heat transfer equipment, defense applications, and academic interests. There are many important problems related to the general process of flame propagation along the interface between a gaseous oxidizer and a liquid or solid fuel. The problem covers a very large field, including processes such as flame spreading in fires, design and operation of gas-fired heat transfer equipment (combustors), and flame spreading over the surface of thermoplastic fuels during the ignition process in hybrid rocket motors. Over the past four decades a vast amount of research has been conducted to understand the mechanism of fire propagation. The objective of this work is to conduct an elaborate literature search and review the approaches that different investigators have undertaken to study and model this complicated process of fire propagation. In short, this work is a compendium of the field of fire and smoke modeling and serves as a report on the current physics-based modeling in this important area of research.

Army Relevance:

Distributed simulation environments have played a significant role in an ever-increasing number of critical military and commercial applications. Applications ranging from fire safety training to virtual warfare benefit tremendously from this emerging technology. The virtual battlefield necessitates the inclusion of real-time obscurants such as fire and smoke into the computer simulation in complex ways. Since real time is an important factor in distributed simulation environments, the inclusion of a physics-based model of fire should require fewer computations and provide a realistic approximation algorithm.

Methodology:

As mentioned above, the main objective of this research work is to conduct a comprehensive survey and analysis of the existing physics-based models for fire behavior and flame and smoke propagation. A thorough literature search has been undertaken. A critical review of the literature suggests that a true mathematical model for flame and fire propagation would require the solution of a system of coupled, multidimensional, elliptic, nonlinear partial differential equations that would include variable material properties, appropriate gas-phase chemical kinetics, solid-phase pyrolysis mechanisms, heat and mass transfer convective terms for multi-component systems, and heat transfer terms due to conduction and radiation. The solution of this full problem, even after considerable simplifications, is extremely difficult. For this reason the analyses developed to date have treated the problem at different levels of complexity. Researchers have investigated fire and flame propagation from a variety of perspectives depending upon the areas of application and their research interests. At this point, we have broadly classified the approaches that different investigators have undertaken to study this complicated process and written a survey article describing these approaches. These approaches are:

     

  1. dimensional analysis for empirical correlations and scaling applications in fire research,

  2. extension of low-temperature heat transfer correlations,

  3. computer simulation fire models,

  4. mathematical modeling of flame spread over the surface of combustible material, and

  5. smoke modeling of pool fires.

     

Accomplishments:

This is preliminary work to understand the behavior of fire and flame propagation. An initial survey article has been written on this work. This work has been accepted for presentation at the 1998 Conference on Simulation Methods and Applications (CSMA 1998), to be held in Orlando, FL, November 1-3, 1998.

 

Planned Activities:

     

  1. The results of this survey will be used to identify the key parameters that affect flame propagation and how these parameters can be used to develop a real-time fire model in a distributed simulation environment.

  2. A more comprehensive analysis of available research work on flame and smoke propagation in a variety of fires is planned.

  3. A rigorous mathematical model, taking important factors into consideration, needs to be developed.

 

Computational Geometry Tutorial

Patrick Bobbie (Lead), M. Arradondo (student),

C. W. Birmingham (student)

 

Research Objectives and Significance:

To enhance the Computational Geometry Tutorial (CGT) system in order to provide more advanced topics/concepts. The enhancement included the introduction of a Virtual Reality (VR) modeling component for animation, quality rendering, and visualization of graphic objects.

Army Relevance:

The CGT system is being developed to support and offer a tutorial system on how to construct and visualize geometric objects for ADSRC students. The ultimate goal is to provide students with the capability to develop graphic objects for visual enhancement of the virtual battlefield in war-game simulations.

Methodology:

The CGT tool implements the fundamental operations of translation, rotation, and scaling, which allow graphic objects to be manipulated using matrix operations. Further, the system uses several low-level graphic constructs and their applications within a simulation environment, such as VRML, Java applets, Bezier curves, B-splines, NURBS, Voronoi diagrams, and convex hulls.

Accomplishments:

The CGT system has been enhanced greatly and is currently available on-line (web-based) for access and use by faculty and students. A summary of the results was published in the proceedings of the ADMI98 Conference, held in Houston, TX, June 26-27, 1998.

 

 

Computational Geometry and Synthetic Environments

Patrick Bobbie (FAMU)

Research Objectives and Significance:

To develop the mathematical basis of the geometric functions for the CGT system.

Army Relevance:

The CGT system is a useful resource for distance learning and offers a tutorial environment for learning computational geometry concepts outside of a classroom setting. Its continual development to benefit all ADSRC participants is important, as it minimizes the learning curve and offers a quicker transition to the development and implementation of more advanced research topics in DIS.

Methodology:

Mathematical concepts such as convex hull, B-spline, Bezier-curves, etc., were analyzed, simplified, and integrated into the CGT system to enhance the synthetic environment. The synthetic environment methods focused on the application of accurate visual and aural data in physical and behavioral models.

Accomplishments:

In addition to the new theoretical concepts introduced and currently being implemented, a paper was presented at the Workshop on Distributed Simulation, AI, and Virtual Environments, Mexico City, February 22-23, 1998. The collaboration with the researchers at LANIA was significant for our visibility.

Mathematical Modeling in Nucleate Pool Boiling Heat Transfer

     

P. Sharma, A. Lee (GSU), H. Williams (FAMU)

     

 

Research Objectives and Significance:

Transport processes with phase change such as boiling (nucleate, film, and flow) are the most efficient heat transfer modes, characterized by small temperature differences and high heat fluxes. Therefore, they are employed in most thermal energy conversion and transport systems, as well as in heating and cooling devices, e.g., the dissipation of high heat fluxes from VLSI microelectronic packaging using nucleate boiling with air jet impingement. Boiling has been one of the most efficient methods for cooling nuclear reactors, space vehicles, and electronic and microelectronic devices. This research work will be helpful in designing efficient and reliable heat transfer equipment where nucleate boiling is an important heat transfer mechanism. The objective of this research work is to develop a physics-based model for calculating heat transfer rates in nucleate pool boiling of pure liquids over a wide range of pressure and heat flux.

Army Relevance:

The mathematical model developed in this work for calculating heat transfer rates in nucleate pool boiling can possibly be used to design efficient heat transfer systems for the cooling of electronic devices such as supercomputers.

Methodology:

The mathematical model for the heat transfer coefficient uses the bubble departure diameter and bubble emission frequency. In this work the equations for these two pertinent quantities are obtained by considering both static and dynamic forces on a growing vapor bubble. The static forces are surface tension and buoyancy, and the dynamic forces are liquid inertia, bubble inertia, and drag. Theoretical equations are developed for all these forces considering the underlying physics and thermodynamics. These equations were eventually expressed in terms of the physico-thermal properties of the liquid and vapor and were used to develop equations for the bubble departure diameter and bubble emission frequency. The magnitudes of these forces were calculated for refrigerants, hydrocarbons, and distilled water over a wide range of heat flux and pressure. These calculations provide a basic understanding of boiling phenomena, e.g., how the magnitudes of these forces change heat transfer coefficients with heat flux and pressure.
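As a point of reference for the static part of this force balance, the classical Fritz-type result (surface tension against buoyancy only, ignoring the dynamic inertia and drag terms the model adds) can be sketched. The symbols below are standard textbook notation, not taken from the paper:

```latex
% Static limit: at departure the buoyancy force overcomes the
% surface-tension force anchoring the bubble to the wall
F_b \;=\; \frac{\pi}{6}\, D_d^{\,3}\, g\,(\rho_l - \rho_v)
\;\approx\; F_\sigma \;\propto\; D_d\, \sigma \sin\theta

% Solving for the departure diameter gives the classical
% Fritz-type correlation (contact angle theta in degrees)
D_d \;=\; 0.0208\,\theta \sqrt{\frac{\sigma}{g\,(\rho_l - \rho_v)}}
```

The model described above refines this static picture by adding the liquid inertia, bubble inertia, and drag terms, which is what lets the resulting departure diameter and emission frequency vary correctly with heat flux and pressure.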

Accomplishments:

The correlation for the heat transfer coefficient has successfully predicted the data of various investigators for liquids and surfaces of differing properties over a wide range of heat flux and pressure. This research work was accepted for the 11th International Heat Transfer Conference, held at Kyongju, Korea, August 23-28, 1998, and published in its Proceedings.

Planned Activities:

The correlation will be tested for a wider range of operating variables such as liquids with widely differing physico-thermal properties (such as cryogenics), heat flux, and pressure.

1.3 Knowledge Based Systems (ADS-KBS) Task

Task Coordinator: Richard Aló

OVERVIEW

 

Objectives and Significance:

The main objective for the third year of this project was to further our knowledge and to attack problems related to software issues in general, and specific applications in advanced distributed simulation in particular. Our foci are: Knowledge Acquisition and Machine Learning; Modeling Tools for Systems under Uncertainty; Decision Making Models in the Presence of Imprecise Information; a Computer Security Model Based on Uncertain/Partial Information; Concurrency Control; and further development of a prototype of an intelligent autonomous navigation system. We show how the following tools significantly impact the above problems: Fuzzy Logic, Fuzzy Neural Networks, Petri Nets, Markov Chains, the Dempster-Shafer Theory of Evidence, Relevance Filtering, Analytical Hierarchical Processing, and Object-Oriented Databases.

Our efforts produce a major impact on how future software may be written. In Machine Learning we provide methods of refinement to update weights for neural networks and to improve our knowledge of membership functions in fuzzy rules. In Systems under Uncertainty we reduce the number of states of the Markov Chain with the relevant Petri Nets and generalize standard fuzzy systems. This provides models of systems using fewer and more powerful rules. In models for decision-making and, in particular, for estimations of project completions, we replace the classical PERT approach with our BIFPET method and demonstrate the advantages of the latter. Our Computer Security Model, which operates in an environment where the user characteristics are uncertain, offers flexible access control to users as a function of their perceived potential hostility. In Concurrency Control, information related to particular databases may only be partially available, and fuzzy logic is applied. Using fixed and dynamic grids, the appropriate level of information to be sent or received is defined. In the Autonomous Navigation Prototype System, hierarchical fuzzy controllers were developed to alleviate an explosion in the number of rules. FuzzyCLIPS and OpenGL were used to exhibit the model.

Army Relevance:

Combat conditions provide very uncertain environments, and first approximations of neural nets and fuzzy systems may be too rough. Combining neural nets and fuzzy rules offers the advantage of a system with learning capabilities while preserving a human-like type of reasoning. Our approach is novel in that the Dempster-Shafer Theory of Evidence and Interval Computations are used to create such systems. When making decisions under uncertainty we develop methods to compare fuzzy sets or quantities arising from them. In order to analyze complex systems, Petri Nets have been used, which typically leads to a combinatorial explosion of states. We address this by aggregating sets of states, which renders Petri Nets more applicable under combat conditions. Computer Security is a prevalent problem for which we offer a flexible approach to software accessibility for users whose characteristics are only partially known. When dealing with large databases, concurrency control techniques allow a large number of simultaneous users. Fuzzy Concurrency Control provides additional insight on how to control the concurrency problem. In HLA it is important to define the minimum rate needed to maintain an accurate and consistent view of the "virtual world". Our fixed and dynamic grid approaches offer a method to define fidelity. Algorithms for efficient data distribution, filtering effectiveness, and computational overhead are analyzed within this architecture. Hierarchical fuzzy controllers alleviate the combinatorial explosion in the number of rules generated by previous approaches for the autonomous navigation prototype system. Interval computation better models the real world and provides reliable solution bounds for many application problems.

Accomplishments:

Imprecise and conflicting data were input into classical neural nets and the effects were analyzed. Fuzzy rules were tuned via the Theory of Evidence, and the weights of neural nets were set to intervals. In this fashion, we studied the inadequacy of classical neural nets and classical fuzzy rules, and remedies to these deficiencies were proposed. An application to spinal cord stimulation for chronic pain management was outlined. Fuzzy Petri Nets for analyzing uncertain and variable environments were developed. This provided a significant reduction in the number of transition states. This model was applied to a problem of policy selection where the cost of each decision was only partially known. Fuzzy expectations were developed in a general setting. Fuzzy rules were analyzed where membership functions did not have precisely defined values. The response of these systems to imprecise rules was modeled. An alternate approach to the PERT method for analyzing project completions was proposed. Comparisons of complex quantities arising from fuzzy sets were proposed, and an application was made to reduce the marketing uncertainties in the development of new products. A strategy for software access involving uncertain quantities such as expected losses, user hostility, and allowable damage amounts was developed. The performance of protocols for multi-user database access has been shown to depend heavily on how transactions and subtransactions are formed. We study these protocols when not all of the information is available. Hierarchical fuzzy controllers reduce the number of rules in our expert system associated with the autonomous navigation system.

 

 

 

Knowledge Acquisition and Machine Learning Using Fuzzy Logic and Fuzzy Neural Networks

R. Aló, A. de Korvin, O. Sirisaengtaksin, S. Hashemi (UHD), V. Kreinovich (UTEP)

 

Objectives and Significance:

This effort focuses on the training of neural networks, the tuning of fuzzy rules, and some applications to the medical field. Fuzzy rules and neural networks typically need refinement in order to work well. Our approaches provide methods of refinement to update the weights of neural networks and to improve our knowledge of the membership functions in fuzzy rules. A potential application of neural nets to stimulation of the spinal cord has been proposed.

Army Relevance:

Since combat conditions provide very uncertain environments, our first approximations of the appropriate neural nets or fuzzy systems are very rough, and refinements are definitely needed. One of the main advantages of using fuzzy rules, as opposed to crisp rules, is that fewer fuzzy rules are needed to describe the behavior of a system. The disadvantage of using fuzzy systems is that learning does not take place. Since neural nets form a known body of knowledge where learning can take place, in recent years researchers have combined neural nets with fuzzy systems to obtain the convenience of human-like rules with the learning capabilities of neural nets. We use the Dempster-Shafer Theory of Evidence to obtain a novel approach to constructing a fuzzy system with learning capabilities.

Methodology:

The Dempster-Shafer Theory of Evidence was used. Individual training sets were used as sources of evidence to shape the membership functions. Also, the weights of neural nets were assumed to be interval-valued, and appropriate convergence algorithms were used. In these works we develop methods to refine sets of fuzzy rules in order to obtain accurate systems. The traditional approach is to combine fuzzy systems with neural nets, where the neural net is used to refine the parameters defining the membership functions involved in the rules. Here a different approach is taken: we use the Dempster-Shafer Theory of Evidence to refine the parameters. Each element of the training set is used as a source of information to generate a corresponding mass on the set of rules, instead of a probability or strength coefficient on single rules. We also suggest how the evidence for a rule obtained from the training set could be combined with a genetic algorithm approach to generate a fitness function and therefore obtain reasonable values for membership functions.
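The evidence-combination step at the heart of this approach is Dempster's rule. A minimal sketch follows; it shows only the standard combination machinery, not the paper's specific rule-tuning scheme built on top of it, and the rule labels in the usage below are hypothetical.

```python
def combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments.

    m1, m2: dicts mapping frozenset focal elements (e.g. sets of rules
    supported by a training example) to masses summing to 1.  Masses on
    intersecting focal elements are multiplied and accumulated; mass
    lost to empty intersections (the conflict K) is renormalized away.
    """
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

In the scheme described above, each training example would contribute one such mass function over the rule set, and repeated combination accumulates the evidence used to shape the membership functions.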

Accomplishments:

Imprecise and conflicting data were input into classical neural nets and the effects were analyzed. Fuzzy rules were tuned via the Theory of Evidence, and the weights of neural nets were set to intervals. In this fashion, we studied the inadequacy of classical neural nets and classical fuzzy rules, and remedies to these deficiencies were proposed. Based on our D-S Theory approach, a fitness function for possible Genetic Algorithms was proposed. An application to spinal cord stimulation for chronic pain management was outlined.

Planned Activities:

We plan to develop intelligent control systems for the automated moving of objects, with or without obstacles, using fuzzy neural networks. We also plan to develop a prototype of a machine learning system with fuzzy rules as input patterns, and to apply our tools to further study of the spinal cord stimulation process. In addition, we plan to look further into the relationship between the Dempster-Shafer Theory of Evidence and Genetic Algorithms, as briefly mentioned above.

 

Modeling Tools for Systems under Uncertainty Using Petri Nets, Fuzzy Logic and Markov Chains

R. Aló, M. Beheshti, A. de Korvin, S. Hashemi, C. Hu, O. Sirisaengtaksin (UHD),

R. Kleyle (Indiana University - Purdue University at Indianapolis),

G. Quirchmyer (University of Vienna)

Objectives and Significance:

Stochastic Petri Nets have been used in the context of stochastic modeling since the 1970s. However, a difficulty often encountered is a proliferation in the number of states in the associated Markov Chain. In the present works, we propose a solution to this proliferation problem by introducing the concept of a fuzzy Stochastic Petri Net. A related problem is to study Markov Chains whose states are fuzzy subsets of some finite state space. An additional pursuit is: given a set of fuzzy rules where the membership functions are not totally determined and a fuzzy input where the membership functions are known, one seeks to predict the output given by the system. This generalizes traditional fuzzy systems.

Army Relevance:

Petri Net techniques can be adapted for quantitative systems analysis through the introduction of temporal specifications, associating a firing delay with transitions. Since a firing delay is probabilistic in nature, the introduction of this component into the basic Petri Net leads to the Stochastic Petri Net. The latter has been used mainly in the context of systems performance evaluation; however, Stochastic Petri Nets are a powerful and versatile tool for stochastic modeling in general. The main difficulty is the proliferation of states mentioned earlier, and we address that problem here.

Methodology:

We begin by considering a Markov Chain whose states are fuzzy sets defined on some finite state space X. We consider chains having only a finite number of fuzzy states. Fuzzy transition probabilities arise naturally when the transition from one state to another is described by such mathematically imprecise phrases as "very likely", "likely", "unlikely", etc. We have looked at procedures for decision making under this type of uncertainty. We describe a method to obtain crisp transition probabilities when the states are finite fuzzy sets. Related to these investigations, we have developed a method of reducing the number of states involved in a Stochastic Petri Net by lumping different states together into fuzzy aggregate states. This structure embodies partial information about a collection of crisp subnets. In particular, we have shown how to compute the stationary probabilities of reaching specific output nodes starting from certain input nodes. These stationary probabilities are fuzzy.
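The lumping step can be illustrated in the crisp special case, where every state belongs fully to one aggregate and states within an aggregate are weighted uniformly; the fuzzy aggregation described above generalizes this by letting states carry membership grades in each aggregate.

```python
def lump(P, aggregates):
    """Lump a crisp Markov chain into aggregate states.

    P: dict-of-dicts transition matrix over crisp states.
    aggregates: dict mapping aggregate name -> list of crisp states.
    Returns the aggregate-level transition matrix, assuming a uniform
    weighting within each aggregate (a crisp stand-in for the fuzzy
    aggregate states of the text, where membership is graded).
    """
    Q = {}
    for A, members in aggregates.items():
        Q[A] = {}
        for B, targets in aggregates.items():
            # average, over states in A, of the mass flowing into B
            mass = sum(sum(P[a].get(b, 0.0) for b in targets)
                       for a in members) / len(members)
            Q[A][B] = mass
    return Q
```

Each row of the lumped matrix still sums to one, so the reduced chain remains a proper Markov chain with far fewer states.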

Accomplishments:

Fuzzy Petri Nets for analyzing uncertain and variable environments were developed. This provided a significant reduction in the number of transition states. This model was applied to a problem of policy selection where the cost of each decision was only partially known. Fuzzy expectations were developed in a general setting. Fuzzy rules were analyzed where membership functions did not have precisely defined values. The response of these systems to imprecise rules was modeled.

 

Planned Activities:

We will analyze how to fire all of the rules, not just the dominant ones. A relatively recent concept that generalizes fuzzy sets of type II is that of the "evidence set". A natural problem is to investigate how to fire a set of rules described by such evidence sets. In particular, one would like to define an appropriate concept of the strength of a rule in such a setting. A starting point might be to generalize Jain's comparison algorithm, which compares fuzzy quantities, to the setting of evidence sets. In addition, we would like to investigate how to defuzzify evidence sets. A problem not considered until now is how to use a training set to better defuzzify sets of type II. In the present work we have already defined the concept of conditional expectation applied to a fuzzy domain. We foresee many potential applications for this concept, particularly in the areas of management and finance, for which crisp data are not available.

Models for Decision Making in the Presence of Imprecise Information

R. Aló, A. de Korvin, K. Omer, M. Shipley (UHD), C. Allen (FAMU), R. Guha (UCF),

R. Kleyle (IUPUI), Phillip Siegel (Loyola University of Long Island)

 

Objectives and Significance:

In many situations, the total information necessary to reach a decision is not available. The underlying probability distribution may only be approximately known, as may be the payoff resulting from taking a specific course of action. In other situations, the relative order of importance of the different factors contributing to selecting a course of action is only roughly known. The main purpose of these works is to develop tools to make an intelligent decision under such conditions. An additional consideration is that over the years the Beta distribution has been used to model variable activity times in the Program Evaluation and Review Technique (PERT). We present here two variations of a fuzzy probability model for project management. We use the concept of Belief In Fuzzy Probability Estimations of Time (BIFPET) to analyze human judgment rather than rely on stochastic assumptions to determine project completion times.

Army Relevance:

We have obtained several methods to compare fuzzy sets or quantities arising from fuzzy sets. Such comparisons are important, as they constitute tools to make decisions under uncertainty. Sometimes uncertainty arises from not precisely knowing the probability distributions and/or payoffs associated with decisions. Sometimes uncertainty arises from experts being unwilling to give precise numbers indicating the relative importance of attributes in pairwise comparisons. In such cases one may want to take a consensus. In addition, researchers have leveled some criticism at the PERT method. In practice the project manager must use three alternative estimates or three alternative Beta distributions depending on the skewness of the activity times. It has been suggested that the skewness of the data can impact the justification for accepting the Beta distribution to determine mean activity times. We address the above-cited problems.

Methodology:

Jain's classical approach to comparing fuzzy quantities deals with sets of finite support only. We begin by considering sets with bounded support and then extend this to sets with unbounded support that vanish at infinity. An alternate approach, which "cuts off" sets with unbounded support, is proposed. This latter method falls somewhere between classical defuzzification and the maximizing-set approach. We generalize the Analytical Hierarchical Process (AHP) method to the fuzzy setting to be able to make a decision based on a consensus of experts who are not certain about making a pairwise comparison between alternate decisions. Moreover, we handle the case where the experts have different capabilities. This latter case involves comparing quantities that generalize fuzzy sets of type II. Also, a comparison is made between the BIFPET and PERT approaches for the round-block production phase of a machine project. One of the many findings is that with BIFPET the activity supervisor may be fairly confident in the optimistic completion time, and if the project manager has a strong belief that this confidence is warranted, the expected completion time should be adjusted downward toward the optimistic time.
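For reference, the classical PERT estimate and a belief-weighted adjustment toward the optimistic time can be sketched. The adjustment formula below is illustrative only, not the BIFPET method itself; `belief` is a hypothetical parameter standing in for the manager's confidence.

```python
def pert_expected_time(a, m, b):
    """Classical PERT (Beta-distribution) estimate of mean activity
    time from optimistic (a), most likely (m), and pessimistic (b)."""
    return (a + 4 * m + b) / 6.0

def bifpet_adjusted_time(a, m, b, belief):
    """Illustrative BIFPET-style adjustment: when the project manager's
    belief in the supervisor's confidence in the optimistic time is
    strong (belief near 1), the expectation is pulled toward 'a'.
    The linear interpolation here is a toy stand-in for the paper's
    fuzzy-probability machinery."""
    te = pert_expected_time(a, m, b)
    return belief * a + (1.0 - belief) * te
```

With zero belief the estimate reduces to the classical PERT mean; with full belief it collapses to the optimistic time, matching the qualitative finding quoted above.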

 

Accomplishments:

An alternate approach to the PERT method for analyzing project completions was proposed. Comparisons of complex quantities arising from fuzzy sets were proposed and an application was made to reduce the marketing uncertainties in the development of new products.

Planned Activities:

We plan to further study the consensus of experts with different capabilities when they are unwilling to precisely assign scores while making pairwise comparisons between alternate decisions. In particular the objects arising from such considerations seem to be of mathematical interest as they generalize fuzzy sets of type II. We plan to demonstrate the usefulness of the BIFPET methodology in cost efficient crashing of project activities. Such an extension is important for insuring adequate assessment of the daily need for specific resources in order to crash activities on the critical path. Also we plan to look at applications, somewhat similar in spirit to our work on reducing the uncertainty in new product development.

 

Autonomous Navigation Systems With Hierarchical Fuzzy Controllers

M. Beheshti, A. Berrached, O. Sirisaengtaksin, D. Smith and G. Kutiev (UHD)

Research Objectives and Significance:

The main objective of this project is to develop a model to autonomously navigate a vehicle using hierarchical fuzzy controllers. The control rules of the fuzzy controller were formulated by applying both a general autonomous navigation scheme that mimics how humans behave in maneuvering a vehicle and a hierarchical structure of rules. Our scheme implements fuzzy if-then rules that characterize human behavior to autonomously control a vehicle. The hierarchy of fuzzy controllers was utilized to avoid the combinatorial explosion in the number of rules that occurs as the number of sub-conditions in the antecedent part of each rule increases. These fuzzy controllers can be extended with minor modifications to any type of vehicle or plane, and to any environment or terrain. Computer simulations were performed on a vehicle with different road conditions using FuzzyCLIPS and OpenGL to exhibit our model.

Army Relevance:

The results of this project provide an alternative way to navigate a vehicle in distributed simulations. The result can also be extended to model planned actions in route-planning simulations. Since fuzzy rules are obtained from experts in the form of linguistic information, our model can be very flexible and reliable.

Methodology:

A fuzzy controller is a very useful and efficient approach for utilizing expert knowledge expressed in if-then form. It provides an algorithm that can convert a linguistic control strategy based on expert knowledge into an automatic control strategy. Previous work has shown that fuzzy controllers yield results superior to those obtained by conventional control algorithms. In particular, the fuzzy controller methodology appears very useful when the processes are too complex for analysis by conventional quantitative techniques, or when the available sources of information are interpreted qualitatively, imprecisely, or with uncertainty. The dynamic behavior of a fuzzy system is usually characterized by a set of linguistic rules based on expert knowledge. The expert knowledge is usually of the form

IF (a set of conditions is satisfied) THEN (a set of consequences can be inferred).

For example,

IF (Speed is High) AND (Braking_Distance is Short) THEN (Brake_Applied is Heavy).
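A single rule of this form can be evaluated with a short sketch. This is a generic Mamdani-style evaluation, not the project's actual FuzzyCLIPS rule base; the triangular membership functions and their breakpoints are illustrative assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def rule_brake_heavy(speed_kmh, braking_distance_m):
    """Firing strength of: IF Speed is High AND Braking_Distance is Short
    THEN Brake_Applied is Heavy. The fuzzy AND is implemented with min()."""
    speed_is_high = tri(speed_kmh, 60, 110, 160)
    distance_is_short = tri(braking_distance_m, 0, 10, 40)
    return min(speed_is_high, distance_is_short)

print(rule_brake_heavy(110, 10))  # 1.0: both antecedent conditions fully hold
print(rule_brake_heavy(60, 50))   # 0.0: antecedent not satisfied at all
```

In a full controller, the firing strength would then scale the consequent fuzzy set ("Brake_Applied is Heavy") before defuzzification.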

This is very useful in designing not only control systems with linguistic dynamics but also control systems with complex dynamical behaviors. Hierarchical fuzzy controllers can be developed to alleviate the combinatorial explosion in the number of rules that occurs as the number of sub-conditions in the antecedent part of each rule increases, because the number of rules can grow exponentially with the number of variables and the number of linguistic terms used by each variable.

 

Figure 1. Hierarchical structure of the fuzzy controller for four input variables

For example, Figure 1 shows a control system with 4 inputs, each having m linguistic terms. A single controller with 4 inputs yields m^4 rules, but hierarchical controllers with 2 inputs at each level yield only m^2 + m^2 + m^2 = 3m^2 rules. The following figure is an example of a rule structure that was implemented.
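The rule-count comparison (m^4 for a flat four-input controller versus 3m^2 for a hierarchy of two-input controllers) can be sketched with illustrative numbers; the function names are ours, not the project's:

```python
def rule_count_flat(n_inputs, m_terms):
    # A single flat controller needs one rule per combination of linguistic terms.
    return m_terms ** n_inputs

def rule_count_hierarchical(n_inputs, m_terms):
    # Pairing inputs two at a time: each 2-input controller has m^2 rules,
    # and combining n inputs requires (n - 1) such controllers.
    return (n_inputs - 1) * m_terms ** 2

# With 4 inputs and m = 5 linguistic terms per input:
print(rule_count_flat(4, 5))          # 625 rules in one flat controller
print(rule_count_hierarchical(4, 5))  # 75 rules (3 controllers of 25 rules each)
```

The gap widens quickly as inputs are added, which is the motivation for the hierarchy.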

 

It is natural to use FuzzyCLIPS to code all the fuzzy rules we derived, since FuzzyCLIPS is a programming language developed especially for linguistic rule-based systems. We developed two graphical simulation scenarios: a highway environment and a war zone. OpenGL was utilized to generate graphics such as terrain, vehicles, tanks, airplanes, and obstacles. OpenGL is a set of graphics libraries that provides a flexible interface to the graphics hardware of Silicon Graphics machines. The libraries support over a hundred routines that set up the light source and every object to be displayed. The FuzzyCLIPS and OpenGL code were combined to construct the simulations, which were run on an SGI machine. Each simulation is about 2-4 minutes long.

Accomplishments:

We have developed a prototype for autonomous control systems using hierarchical fuzzy controllers for navigating vehicles and route planning. The control rules of the fuzzy controller were formulated by applying both a general autonomous navigation scheme that mimics how humans behave in maneuvering a vehicle and a hierarchical structure of rules. Computer simulations were performed on a vehicle under different road conditions using FuzzyCLIPS and OpenGL to exhibit our model.

Computer Security Model Based On Uncertain/Partial Information

A. Berrached, M. Beheshti, A. de Korvin, C. Hu, O. Sirisaengtaksin (UHD)

Research Objectives and Significance:

Advances in computers and telecommunication have greatly expanded user requirements, applications, and the tools available to all users. Users have become more and more dependent on the services provided by networked systems, where data, computer programs, and highly sensitive information are kept in (geographically) dispersed systems and exchanged over telecommunication facilities. Distributed processing systems have emerged to provide the means through which networked systems cooperate to process users' tasks in a seamless and efficient fashion using the communication and computing resources provided by the underlying network structure. Such systems are becoming common in various areas such as banking (e.g. Electronic Fund Transfer systems), trading (Electronic Data Interchange), and computer simulation.

Distributed systems provide tremendous new opportunities and benefits to their users but also raise new challenges. Because of the many different components, functions, resources, and users and the tight coupling between the cooperating systems, security in distributed systems is more difficult to achieve than in regular computer networks. The framework for security in distributed systems is the Open Systems Interconnection (OSI) Security Architecture. The OSI Security Architecture, however, is not an implementable standard; it is just a framework that defines security concepts and services that are used in the open-system security field.

The main objective of this work is to present an access control security model that determines whether a user is permitted to access and perform particular operations on particular data sets. Given the level of hostility of a user in a distributed system and the sensitivity level of the data affected by the requested service, the local host/security guard is called upon to evaluate whether such a request can be safely granted. In general, information such as the hostility level of a remote user and the sensitivity level of a particular data set is either uncertain, only partially known by the local host, or both. Using fuzzy sets to represent uncertain/incomplete information, we present a framework that allows a local host to determine access permission based on such information.

Army Relevance:

This work may have applications for security issues on distributed computer networks where the systems allow general access for trustworthy users.

Methodology:

We developed a security model in which a user xi, from a defined set of users X = {x1, x2, ..., xm}, desires to perform a particular operation oj on data set dk, where oj is an operation from the predefined set of operations O = {o1, o2, ..., ol}, and dk is a data set from D = {d1, d2, ..., dn} maintained by a given organization. In this framework, users are characterized by their level of trustworthiness or hostility. The hostility level can be determined from several factors deduced from the information available about the user, such as the user's clearance level and the security services provided by the remote host. In addition, we define t as the amount of damage that the organization can tolerate from having a user perform operation oj on data set dk.

If PHi is the probability of user xi being hostile, t is the amount of damage that an organization can tolerate, and EWjk is the worst loss expected from performing operation oj on data dk, then the expected loss Eijk can be computed to determine whether user xi should be allowed to perform operation oj on data dk. Obviously, the degree of the expected loss varies with the value of user hostility and is always less than or equal to the worst expected loss.

In general, information such as expected losses, user hostility, and the allowable damage amount is very difficult to assess precisely in numerical terms, so it is natural to express it in the form of fuzzy sets. In linguistic terms, a user can be described as very hostile, somewhat hostile, or not hostile, and the amount of damage can be expressed as very high, low, or very low. In fuzzy notation, somewhat hostile, for instance, can be expressed as:

Ph = .9/.2 + .8/.1

where the supports (i.e., .2 and .1) are the probabilities of the user being hostile, and .9 and .8 are the membership values. The damage amount can be expressed in a similar fashion, with the supports expressed in dollar units.

We establish a procedure to determine whether a user xi should be allowed to perform the operation oj on data dk as follows:

1. Find the expected loss Eijk by evaluating EWjk × PHi.

2. Apply Jain's method of comparison to compare Eijk with t by constructing a maximizing set M from the fuzzy sets Eijk and t.

3. Compute Eijk ∩ M and t ∩ M, then compare the two sets. If the greatest membership value of Eijk ∩ M is greater than the greatest membership value of t ∩ M, then user xi is not allowed to perform operation oj on data dk, since the expected loss from user xi performing operation oj on data dk is larger than the allowable total loss. On the other hand, if the greatest membership value of Eijk ∩ M is less than the greatest membership value of t ∩ M, then user xi is allowed to perform operation oj on data dk.
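The three-step procedure can be sketched in code. This is hedged: the dict-based fuzzy sets, the extension-principle product, and the linear maximizing set are textbook constructions standing in for the authors' exact formulation, and all numeric values are illustrative.

```python
def fuzzy_product(a, b):
    """Extension-principle product of two fuzzy sets given as
    {support: membership} dicts: multiply supports, min the memberships."""
    out = {}
    for xa, ma in a.items():
        for xb, mb in b.items():
            x = xa * xb
            out[x] = max(out.get(x, 0.0), min(ma, mb))
    return out

def jain_prefers(e, t):
    """True if expected loss e exceeds tolerance t under a Jain-style
    comparison using a linear maximizing set M(x) = x / x_max."""
    x_max = max(list(e) + list(t))
    score = lambda f: max(min(m, x / x_max) for x, m in f.items())
    return score(e) > score(t)

# "Somewhat hostile" user: Ph = .9/.2 + .8/.1 (membership/probability)
ph = {0.2: 0.9, 0.1: 0.8}
# Worst expected loss EW and tolerable damage t, in illustrative dollar units
ew = {1000.0: 1.0, 500.0: 0.6}
t = {150.0: 1.0, 300.0: 0.5}

e = fuzzy_product(ew, ph)                         # step 1: expected loss Eijk
print("deny" if jain_prefers(e, t) else "grant")  # steps 2-3: prints "deny"
```

With these numbers the expected loss dominates the tolerance, so access is denied.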

Accomplishments:

We developed a framework for a security model using fuzzy sets to represent uncertain/incomplete information. The model allows a local host to determine access permission, i.e., whether a user is permitted to access and perform particular operations on particular data sets.

 

Applying Fuzzy Logic to an Object-Oriented Concurrency Control

M. Beheshti, A. Berrached, A. de Korvin, O. Sirisaengtaksin (UHD), J. Alsabbagh (GSU), M. Bassiouni (UCF)

Research Objectives and Significance:

Object-oriented database systems (OODBSs) deal with complex data types and hence involve long transactions. Long-running transactions often imply the use of a large number of resources that are inaccessible to other incoming transactions. The main objective of concurrency control mechanisms is to ensure the serializability of concurrent transactions. In addition, a great deal of research has been done in the past several years to develop new algorithms that provide more efficient resource access to concurrent transactions. Our study has also shown that information related to the particular database, the application domain, and the transactions being executed can be used in the process of identifying the groups. In many situations, however, such information is only partially available or imprecise. This research applies fuzzy set theory to a concurrency control mechanism for cases where the needed data is not fully available.

Army Relevance:

This work can be applied to some of the HLA/RTI services, such as the Ownership Management Services.

Methodology:

A concurrency control mechanism called the Group Protocol (GP) for object-oriented database systems was previously developed; it is a combination of the Two-Phase Locking (2PL) and Serialization Graph Test (SGT) techniques. The Group Protocol has the potential of achieving more concurrency at different levels of granularity by subdividing each transaction into a set of subtransactions, each consisting of a group of operations. There are two types of groups, the Update Group (UG) and the Read Group (RG), which differ in the type of nodes they contain and the order in which they release their locks. Our performance evaluation results have shown that the performance of this protocol depends to a great extent on how the groups are formed. In this research we develop a methodology based on fuzzy set theory that allows us to optimize the performance of the Group Protocol based on the available information. The idea is basically as follows: there is a threshold for when we let a transaction change a UG to an RG (to allow more parallelism and hence improve performance). Each transaction is associated with some information I1, I2, ..., Ik. The idea is to compare the fuzzy information value I1 ∩ ... ∩ Ik with a fuzzy threshold using Jain's method. If I1 ∩ ... ∩ Ik exceeds the threshold, the UG will not be changed to an RG. From history we have fuzzy probabilities about the frequency of use of database files or their records. This information, combined with the information about each transaction, is the fuzzy value compared with the threshold. The next paragraph explains how the fuzzy approach is applied to the Group Protocol (GP).

In applying this approach to the Group Protocol (GP), the main issue is to identify the Update and Read groups as is done in the original algorithm (the crisp case). It is clear by definition that Read groups can execute in parallel whereas Update groups cannot. The fuzzy information may provide an opportunity to switch an Update group to a Read group to gain more parallelism. Since the information is fuzzy (partial), it might not be totally true; however, the transactions are tested at the end of their execution using SGT (serialization graph testing) as part of the original mechanism, and in the case of a conflict they are undone to maintain the consistency of the overall database.
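The UG-to-RG threshold test described above can be sketched as follows. The sketch assumes fuzzy information values are membership functions over a shared discrete scale; the intersection (pointwise min) and the Jain-style comparison via a maximizing set are standard constructions, and all names and numbers are illustrative, not taken from the GP implementation.

```python
def intersect(*fuzzy_sets):
    """Fuzzy AND of several {support: membership} sets over their
    common support: pointwise min of memberships."""
    support = set.intersection(*(set(f) for f in fuzzy_sets))
    return {x: min(f[x] for f in fuzzy_sets) for x in support}

def exceeds(a, b):
    """Jain-style comparison using a linear maximizing set M(x) = x / x_max."""
    x_max = max(max(a), max(b))
    score = lambda f: max(min(m, x / x_max) for x, m in f.items())
    return score(a) > score(b)

# Per-transaction fuzzy information I1, I2 (e.g. contention level,
# update frequency), on a shared 1..3 scale:
i1 = {1: 0.2, 2: 0.7, 3: 1.0}
i2 = {1: 0.9, 2: 0.8, 3: 0.3}
threshold = {2: 1.0, 3: 0.6}

combined = intersect(i1, i2)   # I1 AND I2
# If the combined value exceeds the threshold, keep the Update Group;
# otherwise switch it to a Read Group for more parallelism.
keep_update_group = exceeds(combined, threshold)
print("keep UG" if keep_update_group else "switch to RG")
```

A wrong switch is not fatal: as the text notes, the SGT check at commit time undoes conflicting transactions.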

Accomplishments:

This research introduced a fuzzy concurrency control technique to improve performance for long-running transactions where only partial information is available. It is an extension to the previously developed Group Protocol (GP) concurrency control mechanism. The partial information provides some additional insight into the application, which may give us an opportunity to switch an Update group to a Read group to increase the degree of parallelism and improve overall performance. The grouping process of the Group Protocol is deterministic, indicating the order in which the objects in a transaction are traversed. A directed acyclic graph is generated for each transaction; therefore, no recursion or looping is involved in any transaction graph.

By adjusting the GP mechanism based on the fuzzy information, the concurrency control mechanism becomes useful to more types of applications. This is a step toward a concurrency control mechanism applicable to a wider range of applications, whereas concurrency control mechanisms are generally very much application-dependent. The findings of this research have been accepted for publication.

Planned Activities:

The data model used in this research is quite general and abstract, so many details, such as where the actual data is stored (in which object or class), are not addressed here. These details are considered part of future work to further improve the performance.

 

 

On Interval Weighted Three-layer Neural Networks

M. Beheshti, A. Berrached, A. de Korvin, Chenyi Hu, and O. Sirisaengtaksin(UHD)

Research Objective and Significance:

The objective of this project is to study interval neural networks with imprecise input/output data and neurons. In practice, input and output data may lie within a range rather than being precise numbers, and the neurons in a neural network may themselves be imprecise. The interval neural networks we studied may be applied to these cases.

Army Relevance:

The interval neural network is a general concept. The algorithms we discussed can be used to train neural networks that the Army uses.

Methodology:

 

  • Study the general mathematical model of three-layer neural networks.

  • Define the concept of an interval neural network.

  • Categorize interval neural networks into two types.

  • Find the parameters of a type 1 interval neural network by solving interval nonlinear systems of equations.

  • Solve the type 2 interval neural network problem with an interval branch-and-bound algorithm.

  • Apply available interval software packages.
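As a small illustration of the interval arithmetic underlying these steps, here is a sketch of an interval-weighted neuron's weighted sum. Intervals are represented as (lo, hi) tuples; this is our own minimal sketch, not the interval software packages mentioned above.

```python
def i_add(a, b):
    """Interval addition: [a_lo + b_lo, a_hi + b_hi]."""
    return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):
    """Interval multiplication: min and max over all endpoint products."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def i_neuron(weights, inputs, bias):
    """Interval weighted sum w1*x1 + ... + wn*xn + bias, before activation."""
    acc = bias
    for w, x in zip(weights, inputs):
        acc = i_add(acc, i_mul(w, x))
    return acc

# Imprecise input in [0.9, 1.1] with an interval weight [1.8, 2.2]:
print(i_neuron([(1.8, 2.2)], [(0.9, 1.1)], (0.0, 0.0)))  # approx. (1.62, 2.42)
```

Training a type 1 network then amounts to solving for interval weights that reproduce given interval outputs, which is the interval nonlinear system mentioned in the fourth step.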

     

Accomplishments:

We have defined the interval three-layer neural network and categorized such networks into two types. We have also studied general algorithms for solving interval three-layer neural network problems.

Planned Activities:

We plan to develop specific software packages to solve interval neural network problems. We also plan to study Army related applications in this field.

 

 

Spoken Language Dialogue in a Distributed Environment

C. Allen, P. Bobbie, T Weatherspoon, D. Washington (FAMU)

Research Objectives and Significance:

The focus of our research is on developing a methodology and software development environment for creating multi-user spoken language systems.

Army Relevance:

This work provides a way to manage multiple participants who are interacting with a simulation using speech.

Methodology:

We explored several techniques for multi-user dialogue management. We also began developing a visual-programming environment for creating spoken language systems that include multiple participants.

Accomplishments:

Two undergraduate students were given projects related to this research. Both worked in the area of spoken language understanding. Their projects were designed to help them gain familiarity with the field of spoken language understanding and to give them experience in using the tools available for spoken language systems.

 

 

 

 

1.4 Student Outreach and Training Program Task

Task Coordinator: Muddapu Balaram

OVERVIEW

Objectives and Significance:

We encourage socially and economically disadvantaged students whose interests lie in computer science, mathematics, or engineering to pursue careers and graduate programs in these areas. Several educational enrichment opportunities are provided at the pre-college and college levels. Our efforts are focused on increasing the national visibility of career opportunities in computer science, mathematics, and the sciences in general, and on increasing the number of minority students in these fields, where they are under-represented. Through this effort, our goal is to improve the recruitment and retention of talented students in computer science, mathematics, and the sciences.

Army Relevance:

We are providing foundations in mathematics and computer science for pre-college and college students. The primary goal is to increase the quantity and the quality of socially and economically disadvantaged students entering graduate programs and careers in the computer science, mathematics, science, and engineering professions. We will enhance and maintain this pipeline of pre-college, college, and graduate school programs.

Accomplishments:

We have created infrastructure at GSU/UHD that supports quality education for our constituencies. We are establishing Mathematics and Computer Science Centers at area high schools to educate teachers in integrating the use of technology into their curricula. We recruited quality high school graduates with GPAs of 3.5 and higher to major in Mathematics and Computer and Computational Sciences. We have strengthened research in Mathematics and Computer Science at the undergraduate level, and several students have both presented and published in regional and national forums.

We have established a strong High Ability Program that recruits students from all fifty states and prepares them to major in computer science, mathematics, and engineering. Through a strong recruitment and retention program we have increased the number of majors in mathematics, computer science, and computational sciences. The number of freshman computer science majors at GSU during the Fall 1998 semester is about ninety (90), and most of them have a high school GPA in the range of 3.5-4.0 and ACT scores in the range of 24-31. These achievements are made possible primarily by the financial support offered through ADSRC funding.

Planned Activities:

Grambling State University is in the process of establishing a committed and innovative partnership with school districts across the United States through the CISCO Networking Academy. The CISCO Academy prepares students for the demands and enormous opportunities of the information economy while creating a qualified talent pool for building and maintaining education networks. Through a Virtual Schoolhouse grant of up to $500,000 annually, the Networking Academy is setting up laboratories that closely correspond to the real world. Students get their hands on the building blocks of today's global information networks, learning by doing as they design and bring to life local and wide area networks. By providing vital technology support and resources, the CISCO Networking Academy prepares students for the increasingly technology-dependent economy into which they will emerge.

Grambling State University is also developing the Louisiana Army National Guard (LANG) Distance Learning project. This is an outreach opportunity geared toward the nontraditional student population. Currently, there are approximately 11,000 LANG personnel available for education and training in the science and engineering disciplines. LANG has installed compressed video systems in strategic locations throughout the state of Louisiana. We have funding from the Federal and State governments. GSU is planning to offer degree programs to prepare LANG personnel in the technology, science, engineering, and mathematics disciplines.

 

 

PRE-COLLEGE PROGRAM

Objectives and Significance:

Our efforts at the pre-college level are focused on increasing the national visibility of the areas of computer science, mathematics, and the sciences. Through these efforts, our goals are to improve the recruitment of talented students and their retention in these areas.

Accomplishments:

Grambling State University Academic Year Activities

During the Fall 1997 and Spring 1998 semesters, we conducted academic activities for area high school students. We visited several high schools within a radius of 40 miles and selected these students. The students visited the GSU campus every Tuesday and Thursday from 4:15-6:15 p.m. and participated in computer science, mathematics, and physics courses. They were also given, and guided through, several ACT preparatory tests. At the GSU ADSRC Center, Dr. Orville Bignall, ADSRC Executive Director, Dr. Parashu Sharma, ADSRC Co-PI, Mr. Ernest Miles, Coordinator of Student Activities, and Ms. Daphne McGhee, ADSRC Administrative Assistant, coordinated the activities for the area high schools participating in the enhancement programs. In both semesters, Dr. Margaret Schaar, Dr. Parashu Sharma, and Dr. Orville Bignall were the instructors for computer science, mathematics, and physics, respectively. Several of these students joined the Mathematics and Computer Science Department during the Fall 1998 semester. Thus, this grant has established a pipeline for supplying excellent students to our programs.

Grambling High Ability Program for Pre-Freshman

The High Ability Program is an annual recruiting program in which high school juniors have the opportunity to attend Grambling State University as college students before returning to complete their senior year of high school. During the 1998 summer session, sixty students from various high schools across the nation participated in the GSU High Ability Program. These students were offered courses in Computer Science, Mathematics, and English. ADSRC furnished the books and activity fees for those students who expressed an interest in Mathematics and Computer Science. A follow-up is planned to sustain their interest in these areas of study, with the goal of having as many of them as possible matriculate to GSU for tertiary training in mathematics and/or computer science.

 

Army Relevance:

We are providing foundations in mathematics, computer science, and sciences for pre-college students. The primary goal is to establish a pipeline to continually supply highly qualified scientists in the area of Advanced Distributed Simulation Technology. The program stresses the preparation of promising high school students who will eventually participate in the DIS research activities by joining the academic programs at the consortium institutions.

Planned Activities:

We plan to continue these activities during the next year.

 

COLLEGE PROGRAM

 

Objectives and Significance:

Our efforts at the college level are focused on increasing the national visibility of the areas of computer science, mathematics, and the sciences. Through these efforts, our goals are to improve recruitment, reduce attrition, develop high-level interests at the cutting edge of the field, and encourage a transition to graduate programs.

Accomplishments:

Advising/Mentoring/Research Involvement:

One of the ADSRC students, Angela Lee, was engaged in research work in the area of nucleate pool boiling heat transfer. An in-depth analysis of both the static and dynamic forces acting on a growing vapor bubble in nucleate pool boiling was performed, and rigorous expressions were developed for the surface tension and buoyancy forces (static) and the liquid inertia, bubble inertia, and drag forces (dynamic). Through this involvement in research, she came to understand the importance of physics and mathematics in applied engineering problems. She also developed the skills for carrying out advanced-level research work, utilizing library resources, and writing computer programs for an applied engineering problem. She participated in and/or presented research papers at the Louisiana Academy of Sciences, the Second Annual Undergraduate Research Poster Session on Capitol Hill, and the Phillip L. Young Seminar.

 

Problem Solving in Mathematics

One of the important focuses of this proposal is to enhance fundamental skills in various areas of mathematics. We realize that our students have several weak areas in mathematics; therefore, ADSRC students were involved in problem solving through which they could enhance their skills in algebra, trigonometry, coordinate geometry, and calculus. The students were required to work in groups and solve the problems as a team, meeting regularly at a designated time and place.

Tutorial Services

In order to extend help to all students of the College of Science and Technology, we conducted evening tutorials in Calculus and Differential Equations. Dr. Sharma was the instructor.

Basic Skills Development

Basic Skills Development is a subset of ADSRC implemented for those students who did not meet the standard requirements set forth by the ADSRC research grant but whose GPA could be improved within the year to meet them. These students are using the program to enhance their mathematics and computer programming skills in an attempt to improve their overall knowledge of these subjects. Two ADSRC research assistants, Danica Lewis and Tarikah Travis, supervise this program and report to the Executive Director.

POST COLLEGE PROGRAM

 

Objectives and Significance:

Our efforts at the post-college level are focused on increasing the number of students matriculating to graduate programs in mathematics, computer science, and engineering, and on encouraging students to choose teaching as a career.

Accomplishments:

One GSU May 1998 graduate, Angela Lee, is currently pursuing her Master's degree in Computer Science (part time) at Louisiana State University in Shreveport.

Army Relevance:

We are providing our students with linkages to graduate schools. The primary goal is to increase the quantity and the quality of socially and economically disadvantaged students entering graduate programs and careers in the mathematics, science, and engineering professions.

 

 

 

 

 

 

 

 

SECTION 2.

PUBLICATIONS, PRESENTATIONS AND PERSONNEL

2.1 ADS Architecture Design and Evaluation (ADS-ADE) Task

Task Coordinator: Ratan K. Guha

 

Key Personnel Involved in the ADS-ADE Task

 

Faculty:

M. Bassiouni, R. Guha, U. Vemulapati (UCF)

M. Balaram, Y. Reddy, J. Alsabbagh, J. Nandigam (GSU)

R. Aló, M. Beheshti, A. Berrached, A. de Korvin, C. Hu, O. Sirisaengtaksin (UHD)

P. Bobbie, D. Williams, H. Williams, S. Stoecklin, P. Stoecklin, (FAMU)

Graduate Students:

M. Chiu, J. Chu, C. Fang, H. ElAarag, L. Zhang (UCF)

Undergraduate Students:

M. Hung (UHD)

A. Beckus (UCF)

 

 

SUBMITTED AND PUBLISHED PAPERS

SEPTEMBER 29, 1997 THROUGH SEPTEMBER 28, 1998

1. Bassiouni, M. and Fang, C. "Dynamic Channel Allocation for Linear Macrocellular Topology" accepted for publication in the Proceedings of the 1999 ACM Symposium on Applied Computing, February 1999.

2. Chatterjee, S. and Bassiouni, M. "Scalable and efficient broadcasting algorithms for very large internetworks" Journal of Computer Communications, Vol. 21, No. 10, July/August 1998, pp. 912-923.

3. Bassiouni, M.; Chiu, M.; Loper, M. and Garnsey, M. "Relevance filtering for distributed interactive simulation" Journal of Computer Systems Science and Engineering, Volume 13, No. 1, January 1998, pp. 39-47.

4. Chiu, M. and Bassiouni, M. "Predictive Channel Reservation for Mobile Cellular Networks based on GPS Measurements" accepted for publication in the Proceedings of the 1999 IEEE International Conference on Personal Wireless Communications (ICPWC'99), February 1999.

5. J. Chu and R. Guha, "Triangular Level Quorums for Distributed Mutual Exclusion," Journal of Computer Systems Science and Engineering, accepted for publication.

6. A. Berrached, M. Beheshti, O. Sirisaengtaksin, and A. de Korvin, "A Hierarchical Grid-Based Approach To Data Distribution In The High-Level Architecture," 2nd International Conference on Non-Linear Problems in Aviation & Aerospace, ICNPAA-98, Daytona Beach, April 1998.

7. A. Berrached, M. Beheshti, O. Sirisaengtaksin, and A. de Korvin, "Approaches to Multicast Group Allocation in HLA Data Distribution Management," Spring 1998 Simulation Interoperability Workshop, SIW'98, Orlando, pp. 1063-1068, March 1998.

8. R. Aló, M. Balaram, A. Berrached, U. B. Vemulapati and D. Williams, "Intelligent Control in Data Distribution Management of HLA", Proceedings of Conference on Simulation Methods and Applications (CSMA), November 1998 (to appear).

9. A. Berrached, M. Beheshti, O. Sirisaengtaksin, and A. de Korvin, "Evaluation of Grid-Based Data Distribution Schemes in the High-level Architecture", 1998 Conference on Simulation Methods and Applications, Orlando, Florida, November 1998. (to appear)

10. Williams, D., Stoecklin, S., and Stoecklin, P., "Tailoring the Process Model for Maintenance and Reengineering," Proceedings of the Second Euromicro Conference on Software Maintenance and Reengineering, Florence, Italy, March 8-11, 1998.

11. M. Hung and C. Hu, "An Online Interval Calculator", Proceedings of 1998 Conference on Simulation Methods and Applications, to appear, Orlando, Florida, 1998.

12. C. Hu, A. Cardenas, S. Hoogendoorn and P. Selpulveda, "An Interval Polynomial Interpolation Problem and its Lagrange Solution", J. Reliable Computing, Vol. 4, No. 1, pp. 27-38, 1998.

 

2.2 ADS Visualization and Synthetic Environment (ADS-VSE) Task

Task Coordinator: Patrick Bobbie

 

Key Personnel Involved in the ADS-VSE Task

 

Faculty:

P. Bobbie, H. Williams, C. Allen (FAMU)

P. Sharma, O. Bignall (GSU)

O. Sirisaengtaksin (UHD)

R. Guha (UCF)

Graduate Students:

R. Giroux (FAMU)

J. Yang (UCF)

Undergraduate Students:

M. Arradondo, C. Birmingham, A. Byrd, J. Derival, T. Hickson, T. McDaniel, K. McHenry,

D. Moloye and S. Roper (FAMU)

A. Beckus (UCF)

SUBMITTED AND PUBLISHED PAPERS

SEPTEMBER 29, 1997 THROUGH SEPTEMBER 28, 1998

1. M. Arradondo, C. Birmingham, and P. Bobbie, "Computational Geometry Tutorial," Proceedings of the ADMI98 Conference, Houston, TX, June 26-27, 1998.

2. P. Bobbie, "Computational Geometry and Synthetic Environments," Proceedings of the Workshop on Distributed Simulation, Artificial Intelligence and Graphics/Virtual Environments, Mexico City, MX, Feb. 22-23, 1998, pp. 151-155.

3. P. Bobbie, H. Williams, R. Giroux, S. Roper, C. Birmingham, and M. Arradondo, "Distributed Computation and Visualization of the Ising Model in MPI Environment," Workshop on Intelligent Virtual Environments, Sept. 11-12, 1998, LANIA, Xalapa, MX (appearing).

4. P. Sharma, O. Bignall, H. Williams, O. Sirisaengtaksin, and R. Guha, "Physics-Based Models of Fire Simulation: A Survey", 1998 Conference on Simulation Methods and Applications, to be held in Orlando, FL, November 1-3, 1998 (to appear).

5. P. Sharma, "Determination of Heat Transfer Rates in Nucleate Pool Boiling of Pure Liquids for a Wide Range of Pressure and Heat Flux", 11th International Heat Transfer Conference held in Kyongju, Korea, August 23-31, 1998.

6. P. Sharma, A. Lee, and K. Gayden, "Comparison of Two Periods in Bubble Emission Frequency in Nucleate Pool Boiling of Pure Liquids", 72nd Annual Meeting of the Louisiana Academy of Sciences, Southeastern Louisiana University, Hammond, Louisiana, February 6, 1998.

7. K. Gayden, A. Lee and P. R. Sharma, "Contribution of Static and Dynamic Forces in Determination of Bubble Departure Diameters in Nucleate Boiling", The Second Annual Undergraduate Research Poster Session on Capitol Hill organized by Council on Undergraduate Research (CUR), Dirksen Senate Office Building, Washington D. C., April 21, 1998.

8. A. Lee, P. Sharma, and K. Gayden, "Analytical Determination of Bubble Emission Frequency in Nucleate Pool Boiling of Pure Liquids", 8th Annual Phillip L. Young Research Symposium, Grambling State University, LA, April 30, 1998.

     

  9. K. Gayden, A. Lee, and P. Sharma, "Analytical Expression for Bubble Departure Diameters in Nucleate Boiling", 8th Annual Phillip L. Young Research Symposium, Grambling State University, LA, April 30, 1998.

     

     

  10. C. Wallace and R. K. Guha, "An Extended Percolation Model for Fire Propagation Simulation", 1998 Conference on Simulation Methods and Applications, to be held in Orlando, FL, November 1-3, 1998 (to appear).

     

 

2.3 ADS Knowledge Based Systems (ADS-KBS) Task

Task Coordinator: Richard Aló

 

Key Personnel Involved in the ADS-KBS Task

 

Faculty:

R. Aló, M. Beheshti, A. Berrached, A. de Korvin, C. Hu, O. Sirisaengtaksin (UHD)

M. Bassiouni, R. Guha (UCF)

C. Allen, P. Bobbie, D. Williams, H. Williams (FAMU)

Graduate Students:

Eric Vick (UCF)

Undergraduate Students:

G. Kutiev (UHD)

Other:

R. Kleyle (IUPUI)

 

SUBMITTED AND PUBLISHED PAPERS

SEPTEMBER 29, 1997 THROUGH SEPTEMBER 28, 1998

     

  1. A. de Korvin, O. Sirisaengtaksin and G. Kutiev, "Neural Network With Imprecise and Conflicting Data", Advances in Industrial Engineering Applications and Practice Vol. 2 (1997) San Diego, 1085-1090.

     

     

  2. A. de Korvin, S. Hashemi and L. Chaouat., "The Use of The Theory of Evidence To Tune Fuzzy Rules", Advances in Industrial Engineering Applications and Practice Vol. 1 (1997) San Diego, 57-62.

     

     

  3. A. de Korvin, M. Beheshti, A. Berrached, C. Hu, and O. Sirisaengtaksin, "On Interval Weighted Three-Layer Neural Networks", The 31st Annual Simulation Symposium April 5-9 (1998) 188-194 Boston, MA.

     

     

  4. R. Alo, K. Alo, MD, A. de Korvin, and V. Kreinovich, "Spinal Cord Stimulation For Chronic Pain Management: Towards An Expert System", Applications of Advanced Information Technologies, A World Congress On Expert Systems, March 16-20 (1998), 156-164, Mexico City.

     

     

  5. M. Beheshti, A. Berrached, C. Hu, and O. Sirisaengtaksin, "Fuzzy Petri Nets For Decision Making In Uncertain and Variable Environment", J. of Neural Parallel and Scientific Computations 5 (1997) 309-324.

     

     

  6. A. de Korvin, R. Kleyle, "Transition Probabilities For Markov Chains Having Fuzzy States", J. of Stochastic Analysis and Applications, Vol. 15 (4) (1997) 527-546.

     

     

  7. A. de Korvin, R. Kleyle, "Expected Transition Cost Based On A Markov Model Having Fuzzy States With An Application To Policy Selection", J. of Stochastic Analysis and Applications. Vol. 16 (1) (1998) 51-64.

     

     

  8. A. de Korvin, R. Kleyle, "Using The Extension Theorem To Define Fuzzy Expectations", J. of Stochastic Analysis and Applications, Vol. 16 (2) (1998) 303-310.

     

     

  9. R. Alo, A. de Korvin, Hashemi, Quirchmyer, "Finding the Dominant Rule When the Antecedent and Consequent are Fuzzy Sets of Type II and the Inputs are Crisp or Fuzzy", Proceedings of the Xalapa Conference on Virtual Environments and Simulation (to appear)

     

     

  10. R. Alo, A. de Korvin, R. Kleyle, "Fuzzy Stochastic Petri Nets", Journal of Fuzzy and Intelligent Systems. (to appear)

     

     

  11. A. de Korvin, M. Shipley and K. Omer, "BIFPET Versus PERT: Fuzzy Probability Instead of the Beta Distribution", J. of Engineering and Technology Management, Vol. 14 (1997) 49-65.

     

     

  12. A. de Korvin, R. Kleyle, P. Siegel and D. Deeter-Schmetz, "Reducing The Uncertainty In New Product Development: A Fuzzy Set Approach", Allied Academies International Conference, Proceedings of the Academy of Marketing Studies, Vol. 2, No. 2, October 14-17, 1997, Maui, Hawaii, 20-27.

     

     

  13. A. de Korvin, R. Alo, R. Guha, C. Allen and D. Williams, "Comparing Values Arising From Imprecise Information", Proceedings of 1998 Conference on Simulation Methods and Applications, Orlando (to appear)

     

     

  14. R. Alo, A. de Korvin, R. Kleyle, "Extending Jain's Maximization Principle to Fuzzy Sets with Continuous Support", Proceedings of IBERAMIA 1998, Lisbon (to appear)

     

     

  15. M. Beheshti, A. Berrached, O. Sirisaengtaksin, D. Smith And G. Kutiev, Autonomous Navigation Systems With Hierarchical Fuzzy Controllers, Second International Conference On Non-Linear Problems In Aviation & Aerospace, Daytona Beach, Florida, April 29-May 1, 1998

     

     

  16. M. Beheshti, A. Berrached, O. Sirisaengtaksin, and A. de Korvin, Optimizing Concurrency Control in OODBS Using Fuzzy Sets, Expert Systems Applications & Artificial Intelligence, EXPERSYS-98. (to appear)

     

     

  17. Hu, Berrached, de Korvin, and Sirisaengtaksin, "Computer Security Model under Uncertain/Partial Information", Information Processing and Management under Uncertainty in Knowledge Systems, IPMU 98, pp. 1840-1845, 1998.

     

 

 

2.4 Student Outreach, Curriculum Enhancements and Training Program Task

Task Coordinator: Muddapu Balaram

Key Personnel Involved In The Task

 

Faculty:

UHD:

Richard A. Aló - Executive Director, CCSDS and Grants and Contracts

Sangeeta Gad - Director of Recruitment and Retention

Linda Becerra - Director for Advising and Mentoring

Linda Crow - Director for Assessment

Mitsue Nakamura, Vien Nguyen, Eric Wildman - Tutors

Yash Gad, René Garcia, Ash Rehman, Robert Shankin, Debra Toliver, Ana Simmons, Emmanuel Usen, Shishen Xie - Instructors

Houston ISD, Aldine ISD, Galena Park ISD - school districts that provide in-kind teachers and bus transportation for middle and high school students.

GSU:

P. Sharma, Co-Principal Investigator

O. Bignall, ADSRC Executive Director

E. Miles, Jr., ADSRC/NAVO/ARL Student Activities Coordinator

M. Balaram, Lead Principal Investigator

Daphne McGhee, Administrative Assistant

Undergraduate Students:

UHD:

Olga Beiza, Obinna Ilochonwu, Maria Mata, René Garcia, Antonio Ruiz, Veronica Sanchez, Mohammad Farhad, Jesus Azcarraga, Mar Azcarraga, Nathalie Garcia, Armando Guerra, Howard Pierson, Marvelia Rocha, Ana Rocha, Chau Hoang, John Syers, Randy Robinson, Miriam Morales, Maria Cazares, Derek Smith, Steven Tucker, Mark Carpenter, Fletcher Etheridge, Javed Mohammed, Raymond Ramirez, Craig Trauschke, Atif Zamir

 

GSU:

Terrence Bailey, McArthur Billing, Thomas George, Cornelius Coleman, Bryan Briscoe, Earnest Cooper, Justin Cooper, Tifarah Dial, Howard Fisher, Peggy Henley, Rhonda Humes, Sonya Johnson, Kendra Johnson, Asheya Mcalister, Lylanda Miller, Damione Moore, Brandy Reliford, Regina Ratcliff, Reginald Ratcliff, Crystal Rugege, Michael Shaw, Tarikah Tarvis, Cutina Wade, Jacqueline Wall, Yolanda Wall, Angela Lee, Danica Lewis, Daniel Williams, David Williams, Glenda White, Kimberly Williams

PRESENTATIONS OF THE OUTREACH, CURRICULUM ENHANCEMENTS AND TRAINING PROGRAM TASK

SEPTEMBER 29, 1997 THROUGH SEPTEMBER 28, 1998

 

UHD:

Leadership

In addition to research exposure, students are being given opportunities to excel in leadership.

     

  • René Garcia, President, Student Chapter of the Association for Computing Machinery (ACM) at UHD. January 1997-Present.

     

     

  • ACM, led by René Garcia, is complementing the CS curriculum by providing numerous tutoring sessions in CS and upper-level Math courses. In addition, ACM provides numerous hands-on technical workshops (e.g., HTML, Java, Unix, Linux, NT). These tutoring and workshop sessions are conducted within the CS Laboratories and Learning Center.

     

     

  • Science Engineering Fair of Houston (SEFH) - René Garcia led the volunteer efforts for this event. CCSDS provided technical assistance. March 26-28, 1998. Houston, TX.

     

     

  • Local ACM Programming Contest - René Garcia and Antonio Ruiz organized the event. Eight teams competed, consisting of the following students: Obinna Ilochonwu, Mohammad Farhad, Chau Hoang, John Syers, Steven Tucker, Craig Trauschke, and Atif Zamir, Jason Rodgers, George Wang, Thy Hoang, Richard Pedersen, Ronald McGuire, Kashif Jabbar, Chance Casey, Ngat Nguyen, Mohammed Ahmed, Sabina Koshy, Nebal Radwan, Pat Jeneka. May 1, 1998. Houston, TX. Partial funding from CCSDS.

     

     

  • 1st Annual High School Programming Contest - René Garcia and Antonio Ruiz organized the event. Twelve local Houston high schools competed. May 1, 1998. Houston, TX. Partial funding from CCSDS.

     

     

  • SC-CoSMIC Planning Session - Obinna Ilochonwu, Maria Mata, Nathalie Garcia, René Garcia, Jesus Azcarraga, and Mar Azcarraga. June 25, 1998. Houston, TX.

     

 

Conference Participation

South Central - Computational Science at Minority Institutions Consortium (SC-CoSMIC); June 25, 1998; Houston, TX

     

  • René Garcia's abstract was accepted for presentation within the "Strengths and Weaknesses of Minority and Majority Institutions" Panel Session at the SC-CoSMIC Conference.

     

     

  • Obinna Ilochonwu, Maria Mata, Nathalie Garcia, René Garcia, Jesus Azcarraga, and Mar Azcarraga participated in the SC-CoSMIC Planning Session, held February 19, 1998 in Houston, TX, which involved approximately twenty-five students from UHD, Rice University, and Prairie View A&M University.

     

     

  • Obinna Ilochonwu, Maria Mata, Nathalie Garcia, René Garcia, Jesus Azcarraga, John Syers, Antonio Ruiz, and Mar Azcarraga attended SC-CoSMIC through funds from ARL and ADSRC.

     

Association of Departments of Computer and Information Science and Engineering at Minority Institutions (ADMI); June 26-28, 1998; Houston, TX

     

  • Papers by Jesus Azcarraga, Armando Guerra, Mario Barrientos, and Howard Pierson were accepted and presented at the ADMI Conference: "Relevance Filtering in Distributed Simulation Systems" by Jesus Azcarraga, and "Database Management System Implementation of the Task Process Description" by Armando Guerra, Mario Barrientos, and Howard Pierson.

     

     

  • Obinna Ilochonwu, Maria Mata, Nathalie Garcia, René Garcia, Jesus Azcarraga, John Syers, Antonio Ruiz, Mar Azcarraga, Ana Rocha, Marvelia Rocha, and Olga Beiza attended ADMI through funds from ARL and ADSRC. The students also led high school students (from the Houston PREP Program) through the conference.

     

Hispanic Engineer National Achievement Awards Conference (HENAAC); October 8-10, 1998; Houston, TX

     

  • During June - October 1998, René Garcia worked in conjunction with NASA Johnson Space Center on the development of the PreCollege Program, more specifically in seeking speakers and coordinating the program schedule.

     

     

  • During June - October 1998, René Garcia, Antonio Ruiz, Veronica Sanchez, and Marvelia Rocha also assisted in increasing student interest in the conference.

     

     

  • At the conference, René Garcia spoke on behalf of the CCSDS and UHD. Maria Mata spoke on behalf of ADSRC and her experiences as a research assistant.

     

 

Outreach Programs

The following programs have been developed to attract more students.

• A CS Academy has been established for talented upper-level high school students. The Academy, which began operations in the Spring of 1996, consists of university-credited classes in beginning C++ and Pre-calculus offered to selected high school juniors and seniors, allowing students to begin college with CS II and Calculus. 15 students enrolled in FY98. Supplemental funds for this program have been obtained from NASA Headquarters and the NSF National Computational Science Alliance.

• Saturday PREP (now the Computational Science Academy) for 8th- and 9th-grade students; 58 students enrolled in FY98. Material for this program, which began in 1993, was developed through funds from NASA. Problem solving and conceptualization are key elements of this program. Supplemental funds for this program have been obtained from NASA Headquarters and the NSF National Computational Science Alliance.

• The Houston PREP program for middle and high school students completed its 10th year in the summer of 1998. It offers seven weeks of classes for students in grades 7 through 11, particularly those from under-represented populations, in problem solving, logic, computer science, engineering, physics, probability, statistics, science, and technical writing. 200 students enrolled in Summer 1998. This was also the second year that a Fourth Year extension was added to the program with funds from the U.S. Army Research Office. Supplemental funds, to integrate science into the entire PREP curriculum, have been obtained from NASA Headquarters.

• The Center's Learning Center has been made available to CS majors as an open lab for classroom assignments. The Center provided free tutoring for CS and upper-level Math students within the Learning Center. The Learning Center consists of 10 Pentium machines networked with the same software used in the classrooms for CS and upper-level Math.

 

GSU:

Leadership

Similarly, the ADSRC student assistants are encouraged to take an active role and participate in various campus organizations that promote mathematics and computer science.

    • ACM - Association for Computing Machinery

Under the leadership of the officers, including Vice-President Jacqueline Wall (ADSRC research assistant), ACM is currently submitting a proposal to IBM to secure the use of laptops for all ACM members at Grambling State University.

    • PME - Pi Mu Epsilon "National Mathematics Honor Society"

President Tarikah Travis and Vice-President Danica Lewis (ADSRC research assistants) are in the process of rebuilding the organization. At this time, they are planning the First Annual Math Competition for High Schools in Northern Louisiana.

 

Conference Participation

Several ADSRC students' papers were accepted for presentation at the 72nd Annual Meeting of the Louisiana Academy of Sciences, held at Southeastern Louisiana University in Hammond, LA, November 20-21, 1997. In addition, one ADSRC research assistant (as a contributing author) participated in the Second Annual Undergraduate Research Poster Session on Capitol Hill organized by the Council on Undergraduate Research (CUR), in Washington, D.C., April 21, 1998.

The PIs at Grambling State University also designed and supervised over ten group projects involving the development of programs and algorithms that the students would not normally encounter in regular class settings.

 

 

TRAINING PROGRAM PARTICIPATION

SEPTEMBER 29, 1997 THROUGH SEPTEMBER 28, 1998

UHD:

Fiscal Year 1998 Research Projects

Data Distribution Management in Distributed Systems - J. Azcarraga and A. Zamir

Neural Networks - O. Ilochonwu

Performance and Reliability Analysis of Relevance Filtering for Scalable Distributed Interactive Simulation - J. Azcarraga

Database Implementation of the Task Process Description - Barrientos, Guerra and Pierson

Flame Modeling - Trauschke, Ilochonwu, and Smith

Extending Jain's Maximization Principle to Fuzzy Sets with Continuous Support - M. Farhad

Janus War Simulation and An Alternate Study in Java - S. Tucker

Testing Sift Using Janus - M. Mata

 

Summer 1998 Internships

     

  1. J. Azcarraga and R. Azcarraga worked on web projects for the National Computational Science Alliance grants.

     

     

  2. R. Ramirez, A. Guerra, C. Trauschke, and J. Mohammed obtained high-paying positions with employers in the Houston area.

     

     

  3. A. Zamir interned at the Schlumberger Geco-Prakla branch, Houston, TX.

     

     

  4. O. Ilochonwu and H. Pierson continued their research work from the Spring Semesters.

     

     

  5. J. Azcarraga, R. Azcarraga, O. Beiza, M. Mata, R. Robinson, C. Rodgers, H. Pierson, A. Rocha, M. Morales, M. Cazares, Internships with Houston PREP Summer Program; Houston, TX.

     

 

Senior Research Theses

For the 1997-98 academic year, 10 senior research projects were completed:

     

  1. Data Base Management System - Barrientos

     

     

  2. Numerical Solution of Biharmonic Equation - Ramos

     

     

  3. Economic Stability by Analysis of Variance - Sullivan

     

     

  4. Neural Network Algorithm - Badshah

     

     

  5. Object Orientation in Set Operation - Chou

     

     

  6. Conversion/Calculation Program - M. Lee

     

     

  7. WWW Database - George

     

     

  8. Gaming Theory - Grantham

     

     

  9. Network Topologies - Magallon

     

     

  10. Designing Network Protocols Using X-kernel - Chaudhry

     

 

 

 

Scholarship Program

The following undergraduates have been awarded scholarships in Fall 1997, Spring 1998, and/or Fall 1998: Mar Azcarraga; Vanessa Patino; Antonio Ruiz; Benjamin Somosa; Atif Zamir; Pasan Goonasekera; Chau Hoang; Steven Vertucci; Desta Amare; Rogelio Banda; Urvi Thakore; Joe Ramirez; Ijasini Haman; Chinh Pham; Gwendolyn McFarland; Gwen Rivera; Yesenia Corleto; Elizabeth Beery; Loan Pham; Anh Minh Le; Juan Hernandez; Cedric Rice; James White; Billy Banks Jr.; Anoop George; Abram Foster.

 

 

 

 

 

SECTION 3.

VISITS BY ADSRC RESEARCHERS

VISITS BY ADSRC RESEARCHERS AND STUDENTS

SEPTEMBER 29, 1997 THROUGH SEPTEMBER 28, 1998

 

 

1. UNAM, Virtual Environments Group, Mexico City, MX, Feb. 22-23, 1998.

2. LANIA, AI & Virtual Reality Group, Xalapa, MX, Sept. 11-12, 1998.

3. UCF, ADSRC Group Meeting, Orlando, FL, July 1998.

4. Visited the California Institute of Technology (Caltech), Pasadena, to use the library facilities, August 4-14, 1998.

5. 72nd Annual Meeting of the Louisiana Academy of Sciences, February 6, 1998, Southeastern Louisiana University, Hammond, LA.

6. The Second Annual Undergraduate Research Poster Session on Capitol Hill, organized by the Council on Undergraduate Research (CUR), Dirksen Senate Office Building, Washington, D.C., April 21, 1998.

7. Visited Washington, D.C., and met with several United States Senators, Congressmen, and their staff to deliver the message that research projects such as the Advanced Distributed Simulation Research Consortium are very beneficial to students, faculty, and the institution. During this visit we met the Honorable Senator Mary Landrieu of Louisiana, the Honorable Senator Patty Murray of Washington, and the Honorable Congressman Kenneth F. Bentsen, Jr. from the 25th District of Texas. In addition, we met with their staff and conveyed in detail the impact of such funding on undergraduate research and education.

8. Eighth Annual Phillip L. Young Research Symposium, April 30, 1998, Grambling State University, Grambling, LA.

 

 

 

 

 

 

 

 

 

 

 

Appendix

PUBLICATION ABSTRACTS

 

 

 

Dynamic Channel Allocation for

Linear Macrocellular Topology

Mostafa A. Bassiouni and Chun-Chin Fang

School of Computer Science

University of Central Florida

Orlando, FL 32816

Abstract

The wide deployment of real-time services in third generation wireless networks will require handover designs that can simultaneously reduce the blocking probability of handoff requests and decrease the handoff delay. In this paper, we present a dynamic channel assignment scheme for highway macrocellular networks that is suitable for real-time services in terms of execution overhead and efficient in terms of reducing call blocking. The scheme can be used in a large segment of global highways, namely, linear macrocells in which the radio channels used in a given cell cannot be simultaneously used in the two neighboring cells to its left and to its right. The execution time of the scheme per handoff request is of O(1) complexity, the number of transmitted messages per request is small, and the space overhead is also O(1). By using a non-compact initial assignment of nominal channels to neighboring cells, the scheme greatly simplifies the channel selection process and avoids the expensive computation and message exchanges typically needed by dynamic channel allocation schemes. Performance simulation results show that the scheme achieves low blocking probability and is therefore suitable for real-time connections in highway cellular networks.
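The reuse constraint above (a channel used in one cell may not be used in either adjacent cell) and the O(1) admission cost can be illustrated with a toy model. The sketch below is an assumption-laden illustration, not the scheme from the paper: cells simply alternate between two disjoint nominal channel pools, so granting a channel on handoff is a constant-time list operation.

```python
# Illustrative sketch of O(1) channel admission in a linear cellular strip.
# NOT the paper's assignment scheme; it only demonstrates the constraint
# that a channel used in cell i may not be in use in cells i-1 or i+1.

class LinearCells:
    def __init__(self, num_cells, channels_per_cell):
        # Alternate disjoint nominal pools so neighbours never share channels.
        self.free = []
        for i in range(num_cells):
            base = (i % 2) * channels_per_cell
            self.free.append(list(range(base, base + channels_per_cell)))

    def admit(self, cell):
        """Grant a channel on handoff into `cell`, or None if blocked. O(1)."""
        return self.free[cell].pop() if self.free[cell] else None

    def release(self, cell, channel):
        """Return a channel to the cell's nominal pool. O(1)."""
        self.free[cell].append(channel)
```

Because the pools of neighbouring cells are disjoint by construction, no run-time check of the left and right neighbours is needed, which is the spirit of the "non-compact initial assignment" simplification described above.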

 

 

 

 

Predictive Channel Reservation for Mobile

Cellular Networks based on GPS Measurements

Ming-Hsing Chiu and Mostafa A. Bassiouni

 

School of Computer Science

University of Central Florida

Orlando, FL 32816

Abstract

An important consideration in the design of mobile cellular networks is to prevent the frequent blocking of ongoing real-time connections during handoffs. In this paper, we propose and evaluate the performance of a predictive channel reservation protocol based on extrapolating the movement of the mobile. Using GPS and dead-reckoning, each mobile can trace its path and report its movement to the current base station. This base station extrapolates the path of the mobile to determine the neighboring cell that the mobile is currently heading to. When the mobile is within a certain distance from a neighboring cell, the current base station informs the neighboring base station in order to pre-allocate a channel for the expected handoff requests. Cancellation of reservation is also sent if the mobile changes its direction and moves away from the neighboring cell. Detailed simulation tests were used to examine the performance of this protocol and compare it with other schemes (e.g., Guard Channel). Some of the parameters used in the tests include: the interval between successive GPS measurements, the degree of randomness of the mobile motion, call arrival rates and call holding times, the threshold distance that triggers sending reservations, etc. In some cases, false reservations (due to frequent changes in motion) adversely affected the performance of the predictive scheme. Partial remedy of this problem was obtained by using the reserved channels to serve incoming handoff requests on a FIFO basis (rather than dedicating each channel to the mobile that reserved it). The paper presents the results of the extensive performance tests used to evaluate the predictive channel reservation scheme and discusses the insight gained from these performance tests.
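The extrapolation-and-threshold trigger described above can be sketched in a few lines. Everything here (the 1-D cell geometry, `CELL_WIDTH`, `THRESHOLD`, the helper names, the fixed prediction horizon) is an illustrative assumption, not the published protocol:

```python
# Sketch of dead-reckoning-based reservation triggering for a 1-D highway
# of equal-width cells.  Parameters and geometry are assumptions.

CELL_WIDTH = 1000.0   # metres per cell (assumed)
THRESHOLD = 200.0     # reserve when this close to the boundary being approached

def predict_next_cell(x, vx, horizon=10.0):
    """Dead-reckon the mobile's position `horizon` seconds ahead and
    return the index of the cell it is heading into, or None if it is
    predicted to stay in its current cell."""
    cur = int(x // CELL_WIDTH)
    fut = int((x + vx * horizon) // CELL_WIDTH)
    return fut if fut != cur else None

def should_reserve(x, vx):
    """True when the mobile is within THRESHOLD of the cell boundary it is
    moving toward -- the trigger for pre-allocating a handoff channel."""
    if vx > 0:
        dist = CELL_WIDTH - (x % CELL_WIDTH)
    elif vx < 0:
        dist = x % CELL_WIDTH
    else:
        return False
    return dist <= THRESHOLD
```

A reservation cancellation, as described above, would simply be the transition of `should_reserve` from true to false after a direction change.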

 

Scalable and Efficient Broadcasting Algorithms for Very large Internetworks

 

Samir Chatterjee

Mostafa A. Bassiouni

Georgia State University

University of Central Florida

 

 

Abstract

Broadcast is a special case of routing in which a packet is to be delivered to a set that includes all the network nodes. While dynamic and distributed broadcast techniques have been proposed and used in the internet, they unfortunately suffer from scalability problems, i.e., they are not efficient with respect to the tremendous size of today's networks. Moreover, it has been observed that the cost of routing and the broadcast time are two conflicting performance measures as far as optimization is concerned, especially in large networks. Also, many of the current techniques are not robust and perform poorly in the event of link failures. First, we show that in order to achieve universal scalability, internets have naturally acquired a multi-level hierarchical structure. Second, utilizing this existing hierarchy, we propose scalable broadcasting protocols which achieve near-optimal cost and time measures. Because of the hierarchy, our proposed algorithm only maintains information about links connected to direct neighbors, thereby making it scalable to future growth in the size of the network. We show that time-optimal broadcast in point-to-point networks can be achieved by formulating the problem as finding a maximum matching in bipartite graphs. Several heuristics based on matching are presented. Performance bounds are derived, along with numerical and simulation results that demonstrate the validity and feasibility of the scheme.

© 1998 Elsevier Science.
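The matching formulation mentioned in the abstract can be illustrated round by round: in each round every informed node may forward the packet to at most one uninformed neighbour, which is precisely a matching in the bipartite graph between informed and uninformed nodes. The sketch below uses a simple greedy matching rather than the maximum matching of the paper, and all names are illustrative:

```python
# Round-by-round broadcast scheduling sketch.  Each round pairs informed
# senders with uninformed neighbours (a bipartite matching); greedy here,
# whereas time-optimality requires a maximum matching per round.

def broadcast_rounds(adj, source):
    """Simulate broadcast from `source` over adjacency dict `adj`.
    Returns (number of rounds, set of nodes finally informed)."""
    informed = {source}
    rounds = 0
    while len(informed) < len(adj):
        used_senders, newly = set(), set()
        for u in sorted(informed):            # greedy bipartite matching
            if u in used_senders:
                continue
            for v in sorted(adj[u]):
                if v not in informed and v not in newly:
                    newly.add(v)
                    used_senders.add(u)       # one transmission per round
                    break
        if not newly:
            break                             # remaining nodes are unreachable
        informed |= newly
        rounds += 1
    return rounds, informed
```

On a complete graph the informed set can double each round, so n nodes are covered in about log2(n) rounds; on a path it takes n-1 rounds, showing how topology drives broadcast time.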

Relevance Filtering for

Distributed Interactive Simulation

Mostafa Bassiouni and Ming-Hsing Chiu

Department of Computer Science, University of Central Florida, Orlando

Margaret Loper

Georgia Tech Research Institute, Georgia Institute of Technology

Michael Garnsey

STRICOM, Research Parkway, Orlando

 

Abstract

In this paper, we present the results of our work to design and evaluate relevance filtering schemes for distributed interactive simulation (DIS) systems. Relevance filtering is a technique that can effectively reduce the traffic on long-haul links of simulation networks and improve the scalability of DIS systems. Detailed algorithms suitable for the implementation of distributed data filtering in the gateways of DIS networks are presented. Both filtering-at-transmission and filtering-at-reception are explained. Methods to solve the problem of inaccurate state information caused by the distributed nature of data filtering are presented and evaluated.

 

Triangular Level Quorums for Distributed Mutual Exclusion

Jenn-Luen Chu and Ratan K. Guha

School of Computer Science

University of Central Florida

Orlando, Florida

Abstract

Using quorums to synchronize access to a shared resource in a distributed system is attractive because it reduces the number of messages that must be exchanged for a node to enter its critical section. This paper introduces a new approach, called the triangular level coterie, which generates relatively small and equal-sized quorums. The properties of triangular level coteries, such as non-domination, corresponding vote assignments, quorum availability, the complementary property, and the convergence property, are studied. The quorum availability properties of other coteries are also studied.

 

An Extended Percolation Model For Fire Propagation Simulation

Chris Wallace and Ratan K. Guha

School of Computer Science

University of Central Florida

Orlando, FL 32816

Abstract

The modeling of fire in a real-time simulation requires the solution of several problems, one of which is the accurate modeling of its propagation through the virtual environment. In a dynamic environment this requires modeling a variety of environmental factors (e.g., wind, the fuel source, topology, moisture levels). Each of these factors influences the behavior of a fire and may affect the level of influence projected by one or all of the other factors. The percolation approach has a well-documented history in modeling basic fire propagation behaviors through random media with the added effects of wind. We have extended these basic models to include topology, non-random media, moisture content, and effects for both ground propagation and tree canopy propagation. This paper examines the use of a percolation approach to modeling the spread of flame through an open environment and presents a web-based applet implemented in Java.
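The core percolation step can be sketched as follows; the grid layout, parameter names, and the simple additive wind bonus are illustrative assumptions and omit the topology and moisture extensions described above:

```python
import random

def spread_step(burning, fuel, p_base, wind=(0, 0), wind_bonus=0.2, rng=random):
    """One percolation step: each burning cell tries to ignite its four
    neighbours.  Ignition probability is p_base, raised by `wind_bonus`
    when the neighbour lies in the downwind direction.  `fuel` is the set
    of flammable cells; moisture and topology effects would scale the
    probability the same way (omitted in this sketch)."""
    new = set()
    for (x, y) in burning:
        for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (x + d[0], y + d[1])
            if n in fuel and n not in burning:
                p = p_base + (wind_bonus if d == wind else 0.0)
                if rng.random() < p:
                    new.add(n)
    return burning | new
```

Iterating `spread_step` from an ignition point reproduces the familiar percolation behaviour: below a critical `p_base` the fire dies out locally, above it the burnt cluster spans the grid.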

 

Distributed Computation and Real-Time Visualization of the Ising Model in MPI

Environment

Patrick Bobbie, H. Williams, Rupert Giroux, Shawn Roper,

C. W. Birmingham, and M. Arradondo

ADSRC

313 Banneker Technology Building 'A'

Florida A&M University Tallahassee, Florida 32307

http://www.adsrc.famu.edu

Abstract

The goal of the reported project is to utilize the two-dimensional Ising model to represent and simulate the dynamics and behavior of fire and smoke. We consider the elements of fire and smoke as a system of particles which undergo phase transitions and energy-level changes, and exhibit positional changes, to accurately represent real-world phenomena. The Ising model accomplishes this task in two ways. First, the particles are arranged in a lattice and assigned energy levels. Second, the interaction of the particles is analyzed based upon the energy levels. By using two- or three-dimensional lattices and studying the interactions of a cell with its neighboring cells, a model of a statistical mechanical system with phase transitions is possible. However, real-time simulations of large systems with large two- or three-dimensional lattices require significant processing power. Using a distributed computing environment with a Message Passing Interface (MPI) system can reduce the time required to process large lattices mathematically and graphically. A parallel version of the Ising algorithm has been implemented and currently runs in the MPI environment on a cluster of SGI workstations. The OpenGL API is currently being used to render and visualize the simulation to understand the inherent phenomenology.

# This work has been supported by a grant from ARO (ADSRC DAAH04-95-1-0250)
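The single-spin Metropolis update underlying such an Ising simulation can be sketched serially; the MPI domain decomposition used in the project is omitted here, and the parameter names are illustrative:

```python
import math
import random

def metropolis_sweep(spins, T, J=1.0, rng=random):
    """One Metropolis sweep of a 2-D Ising lattice `spins` (values +/-1,
    periodic boundaries).  Serial sketch of the physics only; the
    parallel version would partition the lattice across MPI processes
    and exchange boundary rows between neighbours."""
    n = len(spins)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j] +
              spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
        dE = 2.0 * J * spins[i][j] * nb   # energy cost of flipping (i, j)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] = -spins[i][j]    # accept the flip
    return spins
```

At low temperature an aligned lattice stays almost fully magnetized (the ordered phase), while at high temperature spins decorrelate rapidly, which is the phase-transition behaviour the visualization is meant to expose.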

 

Computational Geometry and Synthetic Environments*

P. O. Bobbie

ADSRC

313 Benjamin Banneker Technical Bldg. 'A'

Florida A&M University

Tallahassee, Florida 32307

http://www.adsrc.famu.edu

 

Abstract

The primary purpose of the ADSRC effort is to develop a military wargaming environment that uses the latest tools in computerized visualization and network technology. We are primarily concerned with the graphic and behavioral aspects of the battlefield simulation process. To achieve the goals of the project, we have contributed in developing various tools and techniques for improving the current state of the art of battlefield simulation. Specifically we have focused on open standards, physical modeling, behavioral modeling, specialty software, and computational geometry. Of these, Computational Geometry forms the basis for the appearance and interaction of components in a three dimensional synthetic environment. We have developed a learning tool called the Computational Geometry Tutorial, CGT, to aid in explaining several 2D and 3D graphic constructs. The CGT demonstrates some of the basic attributes/functions of 3D geometry. These functions provide an insight into the operations of the 3D graphics engine used in virtual world simulations. The CGT is not only a demonstration of mathematical simulation but also a teaching tool for the next generation of researchers. Based in HTML, VRML, and Java, the CGT is easily viewable by anyone, anywhere using a compliant client browser.

Computational Geometry Tutorial**

M. R. Arradondo, C. W. Birmingham, P. O. Bobbie

Advanced Distributed Simulation Research Consortium

313 Benjamin Banneker Technical Building A

Florida Agricultural and Mechanical University

Tallahassee, Florida

http://adsrc.famu.edu

Abstract

The Computational Geometry Tutorial Project (herein referred to as CGT) began as a need for a web based introductory guide to the basic concepts of computational geometry and computer graphics. This need arose as a part of an ongoing Advanced Distributed Simulation Research Consortium (ADSRC) project. The tutorial is based in HTML, VRML, and Java. Thus, the CGT can be readily reviewed by anyone, anywhere using a compliant client web browser. The current version of the CGT features Java applets demonstrating various geometric constructs and VRML demonstrating the capability of three dimensional (3D) graphics.

 

 

A Development Environment for Creating Mobile Spoken Language Systems

Clement Allen, Patrick Bobbie

Advanced Distributed Simulation Research Consortium

313 Benjamin Banneker Technical Building A

Florida Agricultural and Mechanical University

Tallahassee, Florida

Abstract

A spoken dialogue system allows a user to interact with a computer application using conversational speech. Current spoken dialogue systems tend to focus on single-user dialogues: the system interacts with one user, not multiple users. Furthermore, development environments for creating spoken dialogue systems also assume single-user interaction. This paper discusses our methodology for using a distributed computing environment to support multi-user spoken language applications. We have designed a visual programming environment that allows a developer to create spoken language applications that may include multiple users.

Tailoring the Process Model for Maintenance and Reengineering

D. Williams, S. Stoecklin, and P. Stoecklin

Abstract

Emerging technology, aging software, and an influx of new software requirements cause many organizations to be inundated with maintenance projects. New software engineered using a mature process model is, in theory, maintainable with the same process model. Legacy software, not engineered using a process model, is maintained using various ad hoc approaches. This article introduces a practical maintenance approach, called CONFIGURATION IMPACT ANALYSIS, for rapidly planning and performing maintenance of both legacy and engineered software. The approach produces a plan consistent with a mature development process model, and its intended use is to maintain software without degrading its quality.

 

 

Physics-Based Models Of Fire Simulation: A Survey*

 

Parashu R. Sharma and Orville N. Bignall

Department of Mathematics and Computer Science

Grambling State University

Grambling, Louisiana 71245

Henry L. Williams

Department of Mathematics

Florida A & M University

Tallahassee, Florida 32307

Ongard Sirisaengtaksin

Department of Computer Science

University of Houston-Downtown

Houston, Texas 77002

Ratan K. Guha

Department of Computer Science

University of Central Florida

Orlando, Florida 32816

 

Abstract

The modeling of fire and its related phenomena has many implications for public safety, defense applications, and academic interests. Such modeling, and the resulting simulation, is understandably a complex undertaking because of the many academic disciplines that are relevant and must be applied to the development of a comprehensive model. This work is a compendium of the field of fire and smoke modeling and serves as a report on current physics-based modeling in this important area of research.

* Work supported by the Army Research Office, the DoD NAVO/PET Program, and the National Science Foundation

 

Determination of Heat Transfer Rates in Nucleate Pool Boiling of Pure Liquids for a Wide Range of Pressure and Heat Flux*

Parashu R. Sharma

Department of Mathematics and Computer Science

Grambling State University

P. O. Box 1191

Grambling, LA 71245

U. S. A.

 

Abstract

This investigation pertains to the analytical determination of heat transfer coefficients for saturated nucleate pool boiling of pure liquids over a wide range of pressure and heat flux. A detailed analysis of both the static and dynamic forces acting on a growing vapor bubble was performed. Expressions were developed for the surface tension and buoyancy forces (static) and the liquid inertia, bubble inertia, and drag forces (dynamic). These expressions were used to obtain equations for the departure diameter and bubble emission frequency, which were then used in the correlation earlier developed by Bl? (1986) to calculate heat transfer coefficients. The magnitudes of both static and dynamic forces, the bubble departure diameters, the bubble emission frequencies, and the heat transfer coefficients were calculated for distilled water, hydrocarbons, and refrigerants boiling on differing surfaces over a wide range of heat flux and pressure. These calculations show the effect of heat flux and pressure on the forces, bubble departure diameter, and frequency, and aid in a better understanding of the boiling process. The calculation of the forces helps explain why the heat transfer coefficient increases or decreases with increasing or decreasing heat flux and pressure. The deviation between the calculated and the experimental heat transfer coefficients is within

* Work supported by the Army Research Office and the Office of Naval Research

Comparison of Two Periods in Bubble Emission Frequency in Nucleate Pool Boiling of Pure Liquids*

P. R. Sharma, A. Lee, and K. Gayden.

Department of Mathematics and Computer Science

Grambling State University

P. O. Box 1191

Grambling, LA 71245

U. S. A.

Abstract

Bubble emission frequency (f) in nucleate pool boiling heat transfer is defined as the reciprocal of the sum of two periods, the growth period (tg) and the waiting period (tw): f = 1/(tg + tw). Both periods are functions of heat flux, pressure, the boiling liquid, and the heat transfer surface on which boiling occurs. It is important to investigate the relative importance of these two periods over a wide range of operating variables. In this investigation, expressions for the two periods have been developed, and both periods have been calculated for hydrocarbons, refrigerants, and distilled water boiling on heat transfer surfaces made of copper, brass, and stainless steel over a wide range of heat flux and pressure. The study establishes qualitative and quantitative relationships between these periods and heat flux and pressure for the boiling of these liquids. It also reveals the impact of the magnitude of these periods on the value of the heat transfer coefficients.

* Work supported by the Army Research Office and the Office of Naval Research
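The defining relation f = 1/(tg + tw) is simple enough to state directly in code. The following sketch uses illustrative values only (the numbers are not taken from the paper) to show how a shrinking waiting period at higher heat flux raises the emission frequency:

```python
def emission_frequency(t_growth, t_wait):
    """Bubble emission frequency f = 1/(t_g + t_w); both periods in seconds."""
    return 1.0 / (t_growth + t_wait)

# Illustrative values: at higher heat flux the waiting period typically
# shrinks, raising the frequency.
f_low = emission_frequency(0.010, 0.040)    # 20 Hz
f_high = emission_frequency(0.008, 0.012)   # 50 Hz
```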

Contribution of Static and Dynamic Forces in Determination of Bubble Departure Diameters in Nucleate Pool Boiling*

 

Kizuwanda Gayden, Angela A. Lee, and Parashu R. Sharma

Department of Mathematics and Computer Science

Grambling State University

P. O. Box 1191

Grambling, LA 71245

U. S. A.

Abstract

This investigation pertains to an analysis of both static and dynamic forces acting on a typical growing vapor bubble in a nucleate pool boiling heat transfer process. Rigorous expressions are developed for the surface tension and buoyancy forces (static) and the liquid inertia, bubble inertia, and drag forces (dynamic), considering the underlying physics of the process. These expressions are used to develop an equation for the bubble departure diameter. Using these expressions, we calculated the magnitudes of the forces and the bubble departure diameters for distilled water, hydrocarbons, and refrigerants boiling on different surfaces (copper, brass, and stainless steel) over a wide range of heat flux and pressure. The calculations show the effect of heat flux and pressure on the magnitudes of the forces and the bubble departure diameters, and aid in a better understanding of the boiling process. These results can be used to develop accurate correlations for bubble emission frequency and heat transfer coefficients in nucleate pool boiling of liquids over a wide range of heat flux and pressure.

* Work supported by the Army Research Office and the Office of Naval Research

Analytical Determination of Bubble Emission Frequency in Nucleate Pool Boiling of Pure Liquids*

A. Lee, P. R. Sharma, and K. Gayden

Department of Mathematics and Computer Science

Grambling State University

P. O. Box 1191

Grambling, LA 71245

U. S. A.

Abstract

In this work, analytical expressions have been developed for the bubble emission frequency (f) in nucleate pool boiling. Bubble emission frequency is the reciprocal of the sum of two periods: the growth period and the waiting period. Both static and dynamic forces are considered in evaluating the growth period. The expression for the waiting period is obtained using the work of Han and Griffith, and of Lippert and Dougall. These expressions reveal that both periods are functions of heat flux, pressure, the boiling liquid, and the heat transfer surface on which boiling occurs. It is interesting to investigate the relative magnitudes of these two periods over a wide range of operating variables. The two periods and the frequency have been calculated for hydrocarbons, refrigerants, and distilled water boiling on heat transfer surfaces made of copper, brass, and stainless steel over a wide range of heat flux and pressure, using the analytical equations developed in this investigation. The study establishes qualitative and quantitative relationships between these periods and the operating variables (heat flux, pressure, boiling liquid, and heat transfer surface). It also reveals how these periods affect heat transfer rates.

* Work supported by the Army Research Office and the Office of Naval Research

Analytical Expression for Bubble Departure Diameters in Nucleate Boiling*

K. Gayden, A. Lee, and P. R. Sharma

Department of Mathematics and Computer Science

Grambling State University

P. O. Box 1191

Grambling, LA 71245

U. S. A.

Abstract

An analytical expression for bubble departure diameters is extremely important in developing expressions for bubble emission frequency and heat transfer coefficients. Traditionally, bubble departure diameters in nucleate pool boiling have been calculated by considering only static forces (buoyancy and surface tension), as suggested by Zuber and others. It is important to consider the contribution of dynamic forces (liquid inertia, bubble inertia, and drag) to the size of the bubble at the time of departure. In this study, an expression for bubble departure diameters is developed taking into account both static and dynamic forces. Using this expression, bubble departure diameters have been calculated for a variety of liquids boiling on heat transfer surfaces made of different materials over a wide range of heat flux and pressure. The study suggests that the liquid inertia and drag forces play a role equal to that of the surface tension force. The magnitude of the bubble inertia force is much smaller than that of the other forces and may be neglected in determining bubble departure diameters.

* Work supported by the Office of Naval Research and the Army Research Office
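As a point of reference for the static-only treatment the abstract criticizes, the classical Fritz-type balance of buoyancy against surface tension can be written down directly. This is a generic textbook correlation, not the expression developed in the paper, and the water properties used are approximate:

```python
import math

def fritz_departure_diameter(theta_deg, sigma, rho_l, rho_v, g=9.81):
    """Classical static-only balance of buoyancy against surface tension
    (Fritz correlation): D_d = 0.0208 * theta * sqrt(sigma / (g * (rho_l - rho_v))),
    with contact angle theta in degrees, sigma in N/m, densities in kg/m^3.
    This is the baseline the paper argues should be extended with dynamic
    (inertia and drag) forces."""
    return 0.0208 * theta_deg * math.sqrt(sigma / (g * (rho_l - rho_v)))

# Saturated water at 1 atm (approximate properties); yields a departure
# diameter on the order of a few millimeters.
d = fritz_departure_diameter(theta_deg=45.0, sigma=0.0589, rho_l=958.0, rho_v=0.6)
```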

Approaches to Multicast Group Allocation in HLA Data Distribution Management

A. Berrached, M. Beheshti, O. Sirisaengtaksin, A. de Korvin

Department of Computer and Mathematical Sciences

University of Houston-Downtown

Houston, Texas 77002

Keywords: DDM, HLA, Multicast, Clustering

Abstract

 

Data Distribution Management (DDM) is one of the six service categories defined in the HLA Interface Specification. Its purpose is to reduce the amount of data exchanged among HLA federates by allowing each federate to declare regions, in a routing space, in which it is interested in either receiving or sending data. At the core of the DDM relevance filtering problem is how federates, or more specifically objects within federates, with similar interests (subscription/publishing) are clustered into multicast groups. The traditional approach to this problem is the fixed grid-based approach used in the RTI-STOW. Fixed-grid filtering, however, has several shortcomings. This paper describes three alternative approaches: an object clustering-based approach that groups objects based on the proximity of their interest/publication regions, a multi-level grid-based approach that takes advantage of the underlying network architecture, and a hybrid approach that combines features of the grid-based and clustering-based approaches. These approaches are evaluated and compared in terms of their filtering effectiveness and computational overhead in the specific context of HLA data distribution.
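The fixed-grid baseline the paper starts from can be sketched in a few lines: each interest region is mapped to the set of grid cells it overlaps, each cell standing for one multicast group, and two federates exchange data only when their cell sets intersect. A minimal illustration (our own Python sketch, not RTI-STOW code):

```python
def grid_cells(region, extent, n_cells):
    """Map a rectangular interest region ((x0, y0), (x1, y1)) onto the set
    of fixed-grid cells it overlaps; each cell corresponds to one multicast
    group. `extent` is the routing-space size, `n_cells` the resolution
    per axis."""
    (x0, y0), (x1, y1) = region
    w, h = extent
    cx, cy = w / n_cells, h / n_cells
    cells = set()
    for i in range(int(x0 // cx), min(int(x1 // cx), n_cells - 1) + 1):
        for j in range(int(y0 // cy), min(int(y1 // cy), n_cells - 1) + 1):
            cells.add((i, j))
    return cells

# Two federates share data only if their cell sets intersect.
pub = grid_cells(((10, 10), (30, 30)), extent=(100, 100), n_cells=10)
sub = grid_cells(((25, 25), (45, 45)), extent=(100, 100), n_cells=10)
overlap = pub & sub
```

The shortcoming the abstract alludes to is visible here: the region is approximated by whole cells, so some irrelevant data still flows when regions only partially overlap a cell.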

 

A Hierarchical Grid-Based Approach to Data Distribution in the High-Level Architecture

A. Berrached, M. Beheshti, O. Sirisaengtaksin, A. de Korvin

Department of Computer and Mathematical Sciences

University of Houston-Downtown

Houston, Texas 77002

Abstract

One of the key requirements for achieving large scale distributed simulations is to use the available communication bandwidth efficiently. In typical distributed simulations, a particular entity is interested in only a small subset of all other entities in the simulated "world". The objective of relevance filtering methods is to reduce the amount of irrelevant data exchanged among simulations by sending data only when and where it is needed. The High-Level Architecture (HLA), designated as the standard architecture for distributed interactive simulation, provides mechanisms to facilitate the implementation of various relevance filtering schemes. This paper gives an overview of the data distribution services provided by the HLA and analyzes, qualitatively and quantitatively, the performance of the traditional fixed-grid based approach used in current HLA implementations. A hierarchical grid-based approach is described in detail and its performance compared against that of the basic fixed grid approach.
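The hierarchical idea can be sketched as choosing, per region, the coarsest grid level whose cells are still comparable in size to the region, so large regions map to a few coarse cells and small regions to fine ones. A toy illustration under our own assumptions (quadtree-style halving per level), not the paper's exact algorithm:

```python
def level_for_region(width, space_width, max_level):
    """Pick the coarsest hierarchy level whose cell still covers the region:
    level 0 is one cell spanning the whole routing space, and each deeper
    level halves the cell size."""
    level = 0
    cell = space_width
    # Descend while a half-size cell would still cover the region.
    while level < max_level and cell / 2 >= width:
        cell /= 2
        level += 1
    return level
```

A large region thus subscribes to one or two coarse-level groups instead of hundreds of fine cells, which is the source of the overhead reduction the paper quantifies.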

 

OPTIMIZING CONCURRENCY CONTROL IN AN OODBS USING FUZZY SETS

M. Beheshti, A. Berrached, A. de Korvin, and O. Sirisaengtaksin

Department of Computer and Mathematical Sciences

University of Houston-Downtown

Houston, Texas 77002

Abstract

Object-oriented database systems (OODBSs) deal with complex data types and hence involve long transactions. Long-running transactions often imply the use of a large number of resources that are inaccessible to other incoming transactions. The main objective of concurrency control mechanisms is to ensure the serializability of concurrent transactions. In addition, a great deal of research has been done in the past several years to develop new algorithms that provide more efficient resource access to concurrent transactions. We have previously developed a concurrency control mechanism for object-oriented database systems called the Group Protocol (GP), a combination of the Two-Phase Locking (2PL) and Serialization Graph Test (SGT) techniques. The Group Protocol has the potential to achieve more concurrency at different levels of granularity by subdividing each transaction into a set of subtransactions, each consisting of a group of operations. Our performance evaluation results have shown that the performance of this protocol depends to a great extent on how the groups are formed. Our study has also shown that information related to the particular database, the application domain, and the transactions being executed can be used in the process of identifying the groups. In many situations, however, such information is only partially available or imprecise. In this paper, we develop a methodology based on fuzzy set theory that allows us to optimize the performance of the Group Protocol based on the available information.

 

 

COMPUTER SECURITY MODEL BASED ON UNCERTAIN/PARTIAL INFORMATION

A. Berrached, M. Beheshti, A. de Korvin, C. Hu, O. Sirisaengtaksin

Department of Computer and Mathematical Sciences

University of Houston-Downtown

Houston, Texas 77002

Abstract

The main objective of this paper is to present an access control security model that determines whether a user is permitted to access and perform particular operations on particular data sets. Given the level of hostility of a user in a distributed system and the sensitivity level of the data affected by the requested service, the local host/security guard is called upon to evaluate whether such a request can be safely granted. In general, information such as the hostility level of a remote user and the sensitivity level of a particular data set is either uncertain or only partially known by the local host, or both. Using fuzzy sets to represent uncertain/incomplete information, we present a framework that allows a local host to determine access permission based on such information.
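A minimal sketch of the kind of decision the framework makes: treat the hostility and sensitivity levels as fuzzy grades in [0, 1], combine them with a fuzzy conjunction (min), and grant the request when the resulting "deny" grade is low. The threshold and the min operator are our assumptions for illustration, not the paper's actual model:

```python
def access_permission(hostility, sensitivity, threshold=0.5):
    """Grade of 'deny' as the fuzzy conjunction (min) of the user's hostility
    grade and the data's sensitivity grade, both in [0, 1]; grant the request
    when the deny grade stays below the threshold."""
    deny = min(hostility, sensitivity)
    return deny < threshold

granted = access_permission(hostility=0.3, sensitivity=0.9)   # grant
refused = access_permission(hostility=0.8, sensitivity=0.7)   # deny
```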

EVALUATION OF GRID-BASED DATA DISTRIBUTION IN HLA

A. Berrached, M. Beheshti, and O. Sirisaengtaksin

Department of Computer and Mathematical Sciences

University of Houston-Downtown

Houston, Texas 77002

Abstract

 

Efficient and effective data distribution is crucial to the performance of distributed simulation systems and their scalability. The High-Level Architecture (HLA), designated as the standard architecture for distributed interactive simulation, provides a framework for various types of simulation applications to interoperate and interact consistently with each other. The HLA provides mechanisms to facilitate the implementation of various relevance filtering schemes. The actual performance of an HLA system, however, depends to a great extent on the approach taken to relevance filtering. This paper describes the data distribution services provided by the HLA and presents an evaluation of three grid-based approaches: the traditional fixed-grid approach used in current implementations of the HLA, a hierarchical grid-based approach, and a multi-resolution grid-based approach. These approaches are evaluated, qualitatively and quantitatively, in terms of their efficiency and filtering effectiveness.

 

The Implementation of a Two-dimensional Ising Model on a Distributed MPI Environment

Henry L. Williams and Patrick Bobbie

Florida A&M University

Tallahassee, FL 32307

ABSTRACT

The two-dimensional Ising model has been used to represent smoke particle behavior in computer simulations for military applications. This paper presents a method for implementing the model on a distributed local network using the Message Passing Interface (MPI) software system and Silicon Graphics (SGI) workstations, in order to determine how parallel computing architectures can improve the performance of the Ising model for simulations in general. Very briefly, the Ising model can be viewed as a 2-dimensional lattice M of points representing particles that interact with their nearest neighbors. The overall energy E in the system is determined by the independent temperature parameter T. In a massive system S, the number of particles can be very large, so that M can in fact be represented by a very large sparse matrix. It turns out that M can be partitioned optimally into smaller matrices, as sublattices, which can then be processed on different machines in the local network. This block decomposition is subjected to a thorough analysis. The total system operation is then reconstructed, or computed, from the various distributed block processing results. Efficient ways of blocking M and reconstructing S are considered. Essentially, this paper investigates a basic scheme for approximating the parallel operation of a real-time Ising smoke model on a distributed local network.
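The two pieces of the scheme, the per-block Metropolis update and the partition of lattice rows into strips, can be sketched sequentially; in the actual system each strip would live on its own MPI process, with ghost rows exchanged between neighbors. A simplified Python illustration (no MPI, our own simplifications):

```python
import math
import random

def metropolis_sweep(lattice, T, rng):
    """One Metropolis sweep of a 2D Ising lattice with periodic boundaries.
    Sequential stand-in for the per-block update that the paper distributes
    over MPI processes."""
    n = len(lattice)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        s = lattice[i][j]
        nb = (lattice[(i + 1) % n][j] + lattice[(i - 1) % n][j] +
              lattice[i][(j + 1) % n] + lattice[i][(j - 1) % n])
        dE = 2.0 * s * nb          # energy change if this spin flips
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            lattice[i][j] = -s

def row_blocks(n, n_procs):
    """Partition n lattice rows into near-equal contiguous strips, one per
    process, as in the block decomposition described above."""
    base, extra = divmod(n, n_procs)
    blocks, start = [], 0
    for p in range(n_procs):
        size = base + (1 if p < extra else 0)
        blocks.append((start, start + size))
        start += size
    return blocks
```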

 

 

Comparing Values Arising From Imprecise Information

André de Korvin, Richard Alò

Center for Computational Science and Advanced Distributed Simulation

University of Houston-Downtown

Houston, TX 77002

Ratan Guha

School Of Computer Science

University of Central Florida

Orlando, FL 32816

Clement Allen and Diedre Williams

Department of Computer Information Science

Florida A & M University

Tallahassee, FL 32307

Abstract

In many situations, the total information necessary to reach a decision is not available. The underlying probability distribution may only approximately be known as well as the pay-off resulting from taking a specific course of action. In other situations, the relative order of importance of different factors contributing to selecting a course of action is only roughly known. The main purpose of the present work is to make an intelligent decision under such conditions.

 

When the probability distributions and/or the pay-offs are not totally known, fuzzy expected values may be used to make a decision. A standard way to compare fuzzy values is to defuzzify, using the center of gravity of the membership functions. In this work, we develop other methods of comparison, all of them versions of Jain's maximizing set approach. In particular, we compare fuzzy values with unbounded supports, to which the usual maximizing set method is not applicable.

 

When an expert is unable to assign a relative order of importance to the factors involved in a decision, it is natural to (1) state the comparison in imprecise terms and (2) take a consensus of experts. We consider both (1) and (2) in the present work, which leads naturally to comparing objects that may not even be fuzzy sets of type II. We indicate different ways to compare such objects based on different interpretations of these values.
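The center-of-gravity baseline mentioned above is easy to state for a membership function sampled at discrete points; the triangular pay-offs below are invented for illustration:

```python
def centroid(xs, mu):
    """Center-of-gravity defuzzification of a membership function sampled
    at points xs with grades mu: sum(x * mu) / sum(mu)."""
    num = sum(x * m for x, m in zip(xs, mu))
    den = sum(mu)
    return num / den

# Comparing two fuzzy pay-offs by their centroids (the standard method
# the paper takes as its point of departure):
xs = [0, 1, 2, 3, 4]
a = centroid(xs, [0.0, 1.0, 0.5, 0.0, 0.0])   # peaked near 1
b = centroid(xs, [0.0, 0.0, 0.5, 1.0, 0.0])   # peaked near 3
```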

A Blackboard Architecture Interface for Intelligent Agents

Y. B. Reddy, Yolanda Wall, and Jacquiline Wall

Department of Mathematics and Computer Science

Grambling State University, Grambling, LA 71245

 

Abstract

Blackboard (BB) architecture is used when there is no clear hierarchical decomposition of the problem task. The BB system accommodates diverse approaches. The general BB architecture leaves the control component open: it may be implemented as part of the BB, as part of a knowledge source (KS), or as a separate module. A knowledge source is a combination of inference engines and knowledge bases. The BB collects partial answers to the query and combines them into full solutions. The control strategy is opportunistic and module selection is dynamic. Since the solution space is distributed, parallel implementation of the problem is possible using the BB architecture. In this paper we demonstrate the BB architecture for relevant information filtering.
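The control flow described above can be reduced to a toy sketch: knowledge sources post partial answers to the blackboard, and the controller assembles them into a full solution once all subproblems are covered. The class and the example entries are our own illustration, not the paper's system:

```python
class Blackboard:
    """Minimal blackboard: knowledge sources post partial answers keyed by
    subproblem; the controller combines them into a full solution."""
    def __init__(self):
        self.partials = {}

    def post(self, subproblem, answer):
        """Called by a knowledge source when it solves a subproblem."""
        self.partials[subproblem] = answer

    def solution(self, subproblems):
        """Combine partial answers into a full solution, or report that
        the solution is still incomplete."""
        if all(s in self.partials for s in subproblems):
            return [self.partials[s] for s in subproblems]
        return None

bb = Blackboard()
bb.post("filter", "drop irrelevant PDUs")
bb.post("route", "forward to gateway 2")
result = bb.solution(["filter", "route"])
```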

 

 

 

Intelligent Interface Agents Software

Y. B. Reddy, Yvette Deramus, and Regina Ratcliff

Department of Mathematics and Computer Science

Grambling State University, Grambling, LA 71245

 

Abstract

Agents are semi-intelligent programs that assist with repetitive tasks. In most cases, agents make independent decisions within their domains. There are many software packages used on the Internet, in offices, and in other decision-making settings. Agents are used to check mail, answer mail, filter unnecessary information, and retrieve relevant documents; many similar agents exist in the medical field. Interface agents are used at the front end to respond on behalf of users; they activate other agents, such as mail-checking agents, to answer clients. The present research explores the relation between agent software and the interface operated by the (human) user.

Intelligent Systems

Yolanda Wall, Jacquiline Wall, Y. B. Reddy, and Yvette Deramus

Department of Mathematics and Computer Science

Grambling State University, Grambling, LA 71245

Abstract

With advancing studies in expert systems, more and more attention is being directed toward Intelligent Agents (IAs). IAs are software programs designed to operate as humans would, but without human intervention. They are autonomous and have control over their own actions. Some Intelligent Agents are designed to work in conjunction with other Intelligent Agents. There are several kinds of Intelligent Agents, each with its own features, including collaborative, interface, mobile, hybrid, filtering/information, controllable, knowledge-engineer, reactive, heterogeneous, and dummy agents.

This paper discusses these aspects of IAs as well as the architecture, techniques and challenges of intelligent interfaces and where they fit in the scope of Intelligent Agents.

 

The Relevance Filtering Scalability Model at Gateways in Distributed Interactive Simulation

Y. B. Reddy

Dept. of Math and Computer Science

Grambling State University, Grambling, La 71245

Email: ybreddy@alpha0.gram.edu

 

Abstract

The relevance filtering technique reduces traffic on the simulation network and improves the scalability of distributed interactive simulation (DIS) systems. This paper develops a mathematical model for the total savings obtained when irrelevant nodes are filtered at gateways receiving or transferring protocol data units (PDUs). The reduction of load at the gateway level is calculated for the nonsymmetrical case, in which the numbers of nodes at the gateways are not equal. The formula works for any number n of gateways, where each gateway may or may not contain the same number of nodes.
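The flavor of such a savings calculation can be illustrated with a simple stand-in model (this is not the paper's formula): each gateway forwards a PDU only to its relevant nodes rather than to all of them, and the node counts may differ per gateway:

```python
def filtering_savings(nodes_per_gateway, relevant_per_gateway):
    """Fraction of deliveries saved when each gateway forwards a PDU only
    to its relevant nodes instead of all of them. Handles any number of
    gateways with unequal node counts (the nonsymmetrical case)."""
    total = sum(nodes_per_gateway)
    relevant = sum(relevant_per_gateway)
    return 1.0 - relevant / total

# Three gateways with 100, 250, and 50 nodes; only 10% are relevant at each.
s = filtering_savings([100, 250, 50], [10, 25, 5])
```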

Genetic Algorithm Approach for Relevance Filtering in Distributed Interactive Simulations

Dr. Yenumula B. Reddy

Dept. of Mathematics and Computer Science

Grambling State University, Grambling, LA 71245, USA

Email: ybreddy@alpha0.gram.edu

Keywords

DIS, PDUs, Genetic Algorithm, Best-Fit, Building Blocks

Abstract:

 

In a distributed interactive simulation (DIS) exercise, a significant portion of the data transmitted is redundant or irrelevant to a large portion of the entities. A number of techniques, namely packet bundling, data compression, quiescent entities, dead reckoning, and relevance filtering, have been proposed to reduce DIS communications traffic. None of them includes a mathematical model to select the best-fit gateways or entities for relevance filtering. The main purpose of the relevance filtering technique is to reduce communications processing requirements by relaying only to the relevant entities. To relay a message to relevant entities, we must select the best-fit entity or entities at gateways in the DIS network. One approach to computing the best-fit entity is through the building-block hypothesis in genetic algorithms (GAs). The proposed building-block model develops best-fit entities at the gateways to receive or transmit messages from other gateways in the DIS network. This fitness algorithm uses the basic GA operators: crossover, mutation, and variation.
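A generic GA of the kind described, one-point crossover plus bit-flip mutation with elitist selection, can be sketched as follows; the OneMax fitness used in the demo is a stand-in, not the paper's entity fitness:

```python
import random

def evolve(fitness, n_bits=16, pop_size=20, generations=40, seed=0):
    """Tiny genetic algorithm over bit strings (here standing in for an
    entity's interest descriptor): elitist selection keeps the top half,
    children come from one-point crossover plus one bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_bits)      # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_bits)           # bit-flip mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Demo fitness: maximize the number of 1-bits (OneMax).
best = evolve(sum)
```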

 

Royal Road Fitness Function for Relevance Filtering in Distributed Interactive Simulations

Y. B. Reddy

Dept. of Math and Computer Science

Grambling State University, Grambling La 71245

Email: ybreddy@alpha0.gram.edu Fax: 318-274-6388

ABSTRACT

In distributed interactive simulation (DIS) exercises, a significant portion of the data transmitted is redundant or irrelevant to a large number of entities at the gateways. The relevance filtering (RF) method ignores irrelevant information and reduces the communications processing burden at the entity and gateway levels. The difficult problem is identifying the relevant entity based on the exact position of a local entity and the dead-reckoned position of that entity at other gateways; the discrepancy between the two is a filtering error. The Gateway Dead-Reckoning + Reachability Range method and similar methods help reduce the occurrence of filtering errors, but these approaches lack a mathematical basis. A new approach, called the Royal Road fitness function, is proposed to select the best-fit entity at the gateways to receive or transmit messages from other gateways in a DIS exercise.
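The Royal Road fitness function itself has a standard form: the bit string is divided into contiguous blocks, and only fully-set blocks contribute to the score, rewarding the assembly of complete building blocks. A minimal version (the block size is chosen for illustration):

```python
def royal_road(bits, block_size=4):
    """Royal Road fitness: the string is divided into contiguous blocks;
    each fully-set block contributes its size to the fitness, and partial
    blocks contribute nothing."""
    score = 0
    for i in range(0, len(bits), block_size):
        block = bits[i:i + block_size]
        if all(block):
            score += len(block)
    return score

# A string with one complete block of four 1s scores 4; scattered 1s score 0.
```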

 
