Center for Computational Science and Advanced Distributed Simulation (C2SDS)
We plan to expand our work in the following areas:
Investigate soft-object deformation approaches that can be incorporated into our method to overcome the restrictions on the geometric models without increasing the complexity of the reference model.
We plan to extend the method to be less restrictive and to serve the user's need for a flexible user interface. In particular, simple primitives such as points, lines, and boxes will be investigated to provide users with local and global control when specifying features.
Develop a tool or methodology to analyze and detect the ill-behaved transformations that can occur while intermediate sequences are generated, such as distortion, surface self-intersection, and fold-over problems.
Two-Dimensional Simulation of Fire Using the Ising Model and Monte Carlo Simulation - The Implementation
P. O. Bobbie, M. Arradondo, C. Birmingham II, R. Giroux, S. Roper(FAMU)
Research Objectives and Significance:
This research focuses on developing a model to simulate fire in a virtual environment using a distributed parallel computing environment. The approach involves developing a two-dimensional (2-D) simulation of fire that improves on current 2-D fire simulations and uses Monte Carlo simulation techniques and stochastic models such as the Ising Model as templates. A 2-D simulation, FireSim2, was developed to test the theoretical background that could lead to a 3-D fire simulation in a virtual environment.
The effects of fire and the smoke it produces can have a significant impact on the strategic and tactical goals of a battle unit. Military personnel training in virtual-world simulations and training simulators may experience a realistic representation of these phenomenological elements as they occur in real-world theaters of war. As such, the effectiveness of combat simulators may improve as simulated fire and smoke mimic their real counterparts. Consequently, the cost of effectively training soldiers may be reduced.
FireSim2 is currently being expanded from 2-D to 3-D. Additional work is being done on changing the output display method. Currently, the output window is an array of 3600 panels that simulate pixels. With each iteration of the simulation, each of the panels is updated regardless of whether its color value changes. The modifications being done involve getting the array to update only those panels that register a color change. Furthermore, algorithms for simulating fire spread, fuel consumption, smoke generation and reactions to wind are being developed. Cross-platform functionality is being tested to ensure that the fire simulation operates on a variety of computer platforms. A distributed parallel processing technique was investigated to increase the speed of the computation and to expand the capacity of the program to process large arrays.
The ADSRC group at FAMU has developed and improved on the computation and visualization of a two-dimensional fire simulation in its endeavor to create a three-dimensional (3-D) fire simulation for virtual-environment training simulations. Two main goals have been met in producing this simulation:
The simulation system, called FireSim2, uses temperature difference equations, Monte Carlo simulation techniques and the Ising Model (as a template) to produce a visually accurate 2-D simulation of fire with random, erratic behavior.
FireSim2 is an improvement over current 2-D fire simulations, since it generates simulated images of fire that are recognizable to individuals viewing the graphical display. This improvement has been accomplished by using arrays of several thousand elements, each element representing a single pixel, and using probability values that allow the simulation to mimic fire in two dimensions.
Graphical and Visual Analysis of Three-Dimensional Geometric Objects
P. O. Bobbie, D. Walters, K. Boateng, W. Mathurin (FAMU)
Research Objectives and Significance:
The purpose of this research is to develop algorithms to determine the equivalence of two or more geometric models. This simply means taking various three-dimensional models and systematically performing a point-wise comparison of the properties/attributes at each coordinate (x, y, z) point of one model against others. Two models are said to be geometrically equivalent if all the properties match at all points within a statistically set confidence interval.
This research study is beneficial to a variety of applications such as the military, the medical/pharmaceutical community, and the image search and visualization research communities. For the medical/pharmaceutical community, the results of the research will improve on the development of techniques for molecular search and structural/conformational analysis. For example, if the geometric make-up of a protein is known but not its function, its complementary structure can be determined and matched against known structures for docking purposes. This will increase the efficiency and effectiveness of the docking process, a useful tool for researchers at, e.g., the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID). In the military, such techniques will help in object recognition/identification by matching the subject against a database of target images. For example, the identification of enemy weapons/vehicles can be performed by use of this geometric analysis technique.
The main tools and techniques used in this research included OpenGL, Java, C++ and other object-oriented software tools. Algorithms were developed to correlate different three-dimensional geometric models and determine whether they are equivalent. Methods used include the Voronoi and Delaunay Triangulation algorithms, which were incorporated into the graphical analysis procedure. We also developed ideas on statistical methods such as root-mean-square analysis to establish confidence levels for acceptance/rejection of the recognition/matching process.
Thus far, several of the sub-tasks that were part of the overall research goal have been completed. The main accomplishment was the development of an algorithm that performs the initial step in the graphical analysis process. The algorithm uses the quicksort method to sort the coordinate points by each axis. That is, sorting was performed on each of the X, Y, and Z dimensions of the 3-D coordinate points, and matching was done along each dimension. This step enabled efficient point-wise comparison of the coordinate points with a computational complexity of O(N log N), where N is the number of coordinate points. The integration of this algorithm into the Voronoi diagram and Delaunay Triangulation algorithms is currently under development.
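The sort-then-match step above can be sketched as follows. The function and variable names are illustrative (the original implementation is not shown in this report), and the tolerance-based comparison is an assumption.

```python
def match_points(model_a, model_b, tol=1e-6):
    """Point-wise comparison of two 3-D models after sorting.

    Sorting each model's (x, y, z) points lexicographically costs
    O(N log N); a single linear sweep then compares the points
    position by position.
    """
    if len(model_a) != len(model_b):
        return False
    a = sorted(model_a)  # Python's sort is O(N log N), like quicksort
    b = sorted(model_b)
    return all(
        abs(pa[i] - pb[i]) <= tol
        for pa, pb in zip(a, b)
        for i in range(3)
    )

# Two point sets that differ only in ordering are reported equivalent.
cube_corner = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
shuffled = [(0, 0, 1), (0, 0, 0), (0, 1, 0), (1, 0, 0)]
```

Here `match_points(cube_corner, shuffled)` returns True, while perturbing any coordinate beyond the tolerance makes it return False.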
Computer Representation and Control of Images in ADS Environments
H. L. Williams, Girish Kota, Girish Patil (FAMU)
Research Objectives and Significance:
Much work has already been done in the design, development, and implementation of efficient physics-based mathematical models for the computer simulation of fire, smoke, and other phenomenological elements in distributed simulation environments. Our main efforts have been focused on expanding the well-known and simple Ising model to represent these phenomenological elements as systems of tightly coupled particles. Smoke and fire have been successfully represented graphically in our ADSRC computer environment. In this sub-project, our research emphasis has shifted from creating graphical representations of elements like fire and smoke to inserting those graphical images into a dynamic virtual environment. This is a new direction in the ADSRC project. This summer we initially concentrated on exploring and developing ways to reduce the large volume of image data that is created when smoke is simulated in our Ising simulation model. This is a crucial step in implementing scenarios involving smoke interactions with other entities in virtual environments. This capability allows for realistic simulations of battlefield operations and is of wider relevance to government and industry.
The incorporation of real-time obscurants into computer simulations makes the virtual battlefield much more realistic. The reduction in size of the large image data sets also improves the overall network performance of scaled simulations. This allows for military training at higher levels of complexity thereby enhancing readiness for battle.
The two-dimensional Ising model was assumed to represent smoke as a system of tightly coupled particles. Consequently, the effects of temperature, magnetization, and force fields on the particles, arising from particle spin, were assumed to be incorporated into the computer image representation of the smoke. We limited our initial analysis to smoke simulations only. (Other phenomenological elements such as fire and aerosol sprays will be considered at a later time.) We concentrated our efforts on the design, development, testing, and implementation of a Graphical Control Engine (GCE), which would allow for the representation and control of visual objects in a virtual battlefield simulation. Methods for building and implementing the GCE were heavily investigated using digital signal processing and image processing techniques, concepts, and tools such as the discrete Fourier transform and wavelets. OpenGL was the main software tool used in the development of the GCE to represent and control smoke images. The C programming language was also used as needed. Various standard image formats such as MPEG and JPEG were targeted for investigation as useful data formats.
The development of the Graphical Control Engine proved to be quite dependent upon the development of more basic items. We employed two electrical engineering graduate students to assist us with the digital and image processing aspects of the project. However, their learning curve was steep for the ADSRC application environment. In particular, they spent considerable time learning OpenGL and other software tools in our environment; consequently, most of our work was done on the design of the GCE and on the development of a tutorial covering wavelets and other useful DSP and image processing material. The representation of smoke particles in OpenGL was also completely designed.
The following tasks will be included in our future work.
Develop, implement and test a prototype Graphical Control Engine (GCE) which will facilitate the simulation of smoke interactions and effects in a virtual environment.
Expand our work on the prototype GCE to other phenomenological elements such as fire and aerosol sprays.
Investigate how the simulations using the GCE compare with existing work on the simulation of fire and smoke.
Conduct a more extensive review of other techniques such as CFD techniques for the development of mathematical, physics-based models to simulate fire and smoke behaviors.
A Model for Computer Simulation of Fire for Virtual Environment Training Simulators -The Theoretical Models
P. O. Bobbie, R. Giroux, S. Roper, C. Birmingham II, M. Arradondo (FAMU)
Research Objectives and Significance:
The effectiveness of simulation technology and applications depends upon how well they mimic real battle situations and scenarios that soldiers may experience. Therefore, simulating phenomena like fire and smoke is an important part of virtual battlefield training simulators. The objective of this sub-project concerns the development of a two-dimensional (2-D) fire simulation that captures the visual and physical reality of fire. The fundamental component of the research is a 2-D fire simulation program that mimics the random behavior of fire without the effects of wind. To achieve this task, the methodology of current 2-D fire simulation programs has been studied, improved and optimized to generate realistic, visual representations of fire based upon thermodynamics, heat transfer theory and stochastic modeling. In this subproject, we focused on the theoretical underpinnings of these models.
A corpus of sound mathematical models that encompass heat transfer, temperature, chemical reactions, and stochastic effects is crucial to the precise calculation and simulation of fire. These models add precision and realism to the representation of phenomenological effects in virtual battlefield simulations.
Several steps were necessary to develop FireSim2. They include using large data arrays, implementing temperature difference equations, incorporating Monte Carlo simulation techniques, utilizing the Ising Model as a stochastic modeling template and integrating optimal probability values to introduce the predictable and erratic behavior of fire.
The steps used to develop FireSim2 are as follows:
-Data arrays have been expanded to 60-element by 60-element arrays (3600 elements) to increase the visual resolution of simulated fire.
-The adiabatic flame temperature of the fire source has been computed to be 2230 K for the complete combustion of methane.
-Temperature difference equations have been used for each array element to determine the tendency of each element.
-If the temperature difference is positive, the element tends to increase its temperature (absorb energy).
-If the temperature difference is zero, the element tends to maintain its temperature.
-If the temperature difference is negative, the element tends to decrease its temperature (emit energy).
-Several probability values were tested to find an optimal set of values that introduce the random, yet predictable, behavior characteristic of fire.
-Positive tendency: 0.80 to 0.99
-Neutral tendency: 0.50 to 0.99
-Negative tendency: 0.80 to 0.99
-Monte Carlo simulation techniques were implemented to create a discrete-time simulation.
-The Ising Model was employed as a stochastic modeling template for the program.
-Several different colors were assigned to temperature ranges to visualize the fire in a display window.
-Source code for the program, FireSim2, was written in Java 2.
-Several runs of FireSim2 were utilized to determine the probability values that give the best results.
-Positive tendency: P = 0.95
-Neutral tendency: P = 0.50
-Negative tendency: P = 0.95
The completion of the above procedures has led to the development of the current version of FireSim2 which successfully simulates an ideal fire in 2-D without the presence of wind.
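The tendency rules and optimal probabilities listed above can be sketched as a per-element update rule. Only the tendency probabilities (0.95, 0.50, 0.95) come from the text; the neighbor-average input and the temperature step size are illustrative assumptions.

```python
import random

# Only the tendency probabilities come from FireSim2's reported results;
# STEP and the neighbor-averaging scheme are assumptions for this sketch.
P_POS, P_NEU, P_NEG = 0.95, 0.50, 0.95
STEP = 10.0  # hypothetical temperature change per iteration (K)

def update_cell(temp, neighbor_avg, rng):
    """Move one array element's temperature according to its tendency."""
    diff = neighbor_avg - temp
    if diff > 0:
        # positive tendency: absorb energy with probability 0.95
        return temp + min(STEP, diff) if rng.random() < P_POS else temp
    if diff < 0:
        # negative tendency: emit energy with probability 0.95
        return temp + max(-STEP, diff) if rng.random() < P_NEG else temp
    # neutral tendency: the element holds its temperature either way
    # (FireSim2 used P = 0.50 here), so the outcome is the same in this sketch.
    return temp
```

Iterating this rule over every element of the 60x60 array, with colors assigned by temperature range, yields the random yet predictable flicker described above.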
The FireSim2 fire simulation program currently runs on any platform that supports a stable Java environment. It was developed on an SGI platform by the FAMU ADSRC laboratory group.
Since virtual environment simulators used by the U.S. Armed Forces use displays with 3-D objects and features, FireSim2 must be expanded from 2-D to 3-D. Furthermore, algorithms for simulating fire spread, fuel consumption, smoke generation and reactions to wind must also be developed. Additionally, cross-platform functionality must be tested to ensure that the fire simulation can operate on a variety of computer platforms, especially PC environments, to make it economically viable. Work on distributed parallel processing for speed-up must also continue, both to improve the simulation's calculations and graphics generation and to expand the capacity of the program to process large data arrays.
Using the Message Passing Interface (MPI) for Computational Speed-up, Color Mapping and Visualization
P. O. Bobbie, H. L. Williams, S. Roper, R. Giroux, C. Birmingham II, M. Arradondo (FAMU)
Research Objectives and Significance:
Use of the MPI environment was expected to increase the speed of the computations required of the 2-D Ising calculations. (This component of the algorithm complements the sub-project on using heat transfer equations and other models for modeling fire behavior.) It was assumed that the Message Passing Interface system would reduce the time necessary to process large lattices both mathematically and graphically. A parallel version of the Ising algorithm was implemented and currently runs in the MPI environment on a cluster of SGI workstations. Currently, the Java/OpenGL API environment is being used to render and visualize the simulation to understand the inherent phenomenology. In the heat transfer subtask, fire behavior was modeled according to the Ising lattice function, which was decomposed into sub-lattices.
Analysts studying fire and smoke often employ computer simulations which model the behavior of these statistical mechanical systems using physics and mathematical theory. Computer programs like CFAST and FAST create zone models of fire and smoke that predict the effect of temperatures, various gas combinations, and the height of the smoke layer in multi-compartment structures. Distributed simulation environments have played a significant role in an ever-increasing number of critical military and commercial applications. Applications ranging from fire safety training to virtual warfare benefit tremendously from this emerging technology. These synthetic virtual environments necessitate the inclusion of real-time obscurants such as fire and smoke into the computer simulation in complex ways.
Real-time simulations of large systems with two-dimensional or three-dimensional lattices require significant processing power. Using a distributed computing environment with an MPI system reduced the time required to process large lattices mathematically and graphically. In particular, an increase in computational speed was observed when the sub-lattice sizes were increased. The Monte Carlo simulation was performed in a five-node distributed MPI environment. This environment used four client nodes and one host node, which were hard-coded into the simulation program. The configuration therefore required N+1 nodes (the host and N clients), with each sub-lattice being a 5x5 matrix/grid. Four MPI 'daemon' processes were spawned on the nodes, and each 5x5 sub-lattice was mapped onto one SGI O2 node. Using the MPI and Ising calculations, it was possible to represent a cluster of particles as a 2-D grid. Working from this architecture, visualization of some of the most random physical phenomena became relatively easy.
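The host/client decomposition described above can be sketched as follows. The sizes (four 5x5 sub-lattices, one per client, plus a coordinating host) follow the text; the origin-list bookkeeping is an illustrative assumption, and the actual MPI communication code is not shown.

```python
# A 10x10 lattice split into 5x5 sub-lattices, one block per client node;
# the (row, col) origin list is illustrative bookkeeping for this sketch.
def decompose(lattice_size=10, sub=5):
    """Return the (row, col) origin of each sub-lattice block, in the
    order the host would hand them to client nodes 1..N."""
    return [
        (r, c)
        for r in range(0, lattice_size, sub)
        for c in range(0, lattice_size, sub)
    ]

blocks = decompose()
# Four blocks -> four clients, plus the coordinating host: N + 1 = 5 nodes.
```

Each client would then run the Ising/Monte Carlo update on its own block and report boundary values back to the host, which is what makes larger sub-lattices pay off in computational speed.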
Results from the numerical calculations in the simulation provided sets of input data for an OpenGL backend graphics program to render the graphical output. The OpenGL subsystem mapped the computed data values into graphic elements by assigning different colors to the temperature values. For example, each particle might have an initial ambient temperature value that corresponds to a transparent color to simulate the presence of air. With the introduction of a flame element, the particles that actually comprised the flame were varied in color from bright yellow in the hottest areas to dark red in the cooler areas. Those particles near the edges of the fire were given a gray color to indicate the presence of smoke. Incorporating heat transfer principles into the program allowed for the calculation of subsequent temperatures for each particle in the array. A recursive calculation of the basic heat transfer algorithm, coupled with graphical color assignment and mapping techniques, was used to generate a simple animated fire effect.
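The temperature-to-color assignment described above can be sketched as a simple lookup. The color categories (transparent air, bright yellow core, dark red cooler regions, gray smoke at the edges) follow the text; the numeric thresholds and ambient temperature are assumptions.

```python
AMBIENT = 300.0   # assumed ambient temperature (K)
CORE = 1800.0     # assumed threshold for the hottest regions (K)

def color_for(temp, at_edge=False):
    """Map a particle's temperature to a display color category."""
    if temp <= AMBIENT:
        return "transparent"   # ambient air
    if at_edge:
        return "gray"          # smoke near the edges of the fire
    if temp >= CORE:
        return "bright yellow" # hottest areas of the flame
    return "dark red"          # cooler areas of the flame
```

In the OpenGL backend, each category would be bound to an RGBA value before drawing the particle grid.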
1.3 Knowledge Based Systems (ADS-KBS) Task
Task Coordinator: Richard Aló
Objectives and Significance:
The main objective for the fourth year of this project was to further our knowledge and to attack problems as related to software issues, in general, and specific applications in advanced distributed simulation in particular. Our foci are on: Fuzzy Quantities Estimates based on Uncertain Information; Extension of a Computer Security Model Based on Uncertain/Partial Information; Design of Unmanned Vehicle Controller; Spoken Language Dialogue in a Distributed Environment; A system for Image Matching for Subject Identification; Extending Temporal Query Languages to Handle Imprecise Time Intervals; Data Mining and Knowledge Discovery in Database Systems and for Classification and Prediction Rules in Large Data Base Systems; Techniques and Applications of Fuzzy Set Theory to Difference and Functional Equations and their Utilization in Modeling Diverse Systems; Concurrency Control; Validation of Authentic Reasoning using Expert Systems; Applications to Blood Analysis and to Quality versus Sample Size to meet Quality Goals. We show how the following tools significantly impact the above problems: Fuzzy Logic, Fuzzy Neural Networks, Genetic Algorithms, Image Analysis and Data Base Management Tools.
Our efforts produce a major impact on how future software may be written. In Estimating Fuzzy Quantities we present a general method for generating fuzzy information. In our Computer Security Model we enhance our previous investigations of an access control security model to one that allows us to obtain the probability of hostility of a user in a system based on a set of available fuzzy values. Our Unmanned Vehicle Controller project investigates methods to design a controller that will respond to changes in the environment in real time and to develop a minimal set of rules to efficiently guide the vehicle with minimal computing requirements. The Spoken Language Project develops a methodology and software development environment for creating distributed speech applications. The Image Matching Project is proceeding to develop an image-matching tool for investigative functions. Our Temporal Query Language project considers techniques for handling imprecision in temporal databases. Data Mining and Knowledge Discovery is concerned with the discovery of useful patterns that can be extracted from databases. The Data Mining Project for Classification and Prediction Rules in Large Data Bases investigates classification-modeling algorithms. In Concurrency Control we extend previous work on the use of a Navigational Transaction Language to indicate how transactions perform on a spectrum of objects. There are several projects that consider the application of fuzzy set theory to such areas as blood analysis, meeting quality goals, and the validation of authentic reasoning.
Combat conditions provide very uncertain environments, and first approximations by neural nets and fuzzy systems may be too rough. Combining neural nets and fuzzy rules offers the advantage of a system with learning capabilities while preserving a human-like type of reasoning. In security systems, determining whether a remote user is permitted access is often based on uncertain information, and we present methods to generate fuzzy information. For making decisions under uncertainty, we develop methods to compare fuzzy sets or quantities arising from them. Computer security is a prevalent problem for which we offer a flexible approach to software accessibility for users whose characteristics are only partially known. When dealing with large databases, concurrency control techniques allow a large number of simultaneous users. We provide approaches that will help the Army better utilize and organize its databases. Fuzzy concurrency control provides additional insight on how to control the concurrency problem. These techniques are also applied to determining the correct level of carbon dioxide in the blood, an important factor when troops are exposed to toxicity. Sampling is crucial in any activity involving production, especially large-scale production, and the techniques apply to quality control. When decisions on the battlefield must be made quickly, often without full information, our methods for determining the larger of two continuous fuzzy sets with unbounded support will assist. We also present methods to validate authentic reasoning in expert-system development that incorporates partially conflicting views of the experts. Our techniques applied to difference and functional equations will be useful in applications to inventory analysis, learning models, system theory, genetics, and ecological models. Our methods are also applicable to distributed computer networks where systems allow general access for trustworthy users.
Temporal databases are extremely useful in many Army applications such as virtual training exercises. Our methods enhance query languages with a capability to deal with imprecise time intervals, significantly increasing their usefulness and applicability. Algorithms for efficient data distribution and handling are provided. Data mining in general, and prediction and classification modeling in particular, have numerous applications, such as course-of-action selection by producing a model based on data accumulated from previous war scenarios. The unmanned vehicle controller design results can be generalized and applied to automatic control systems in general. The Spoken Language dialogue in a distributed environment will support a broad range of speech applications.
Imprecise and conflicting data were input into classical neural nets and the effects were analyzed. We developed a framework for a security model using fuzzy sets to represent uncertain/incomplete information. We also devised a method for estimating fuzzy quantities such as a user's level of hostility, which itself depends on a set of fuzzy factors. We investigated the problem of handling time impreciseness in temporal databases and designed three models for the representation of imprecise time intervals. We analyzed the underlying logic and important properties of each model. Extensions to existing query constructs at both the transaction level and the operator level have been developed. Our models and extensions enrich the flexibility of temporal databases and can be used to help users obtain more meaningful replies to their temporal queries. We have implemented the ID3 algorithm, a popular classification algorithm that uses a decision tree method to generate classification rules from a training data set, and evaluated its performance with several crisp data sets. We have investigated the underlying physics model for vehicle navigation and obstacle avoidance and defined a preliminary set of relevant parameters. We have also devised a genetic-algorithm-based method for generating the navigation rules. A strategy for software access involving uncertain quantities such as expected losses, user hostility, and allowable damage amounts was developed. The performance of protocols for multi-user database access has been shown to depend heavily on how transactions and subtransactions are formed. We study these protocols when not all of the information is available.
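The ID3 algorithm mentioned above chooses decision-tree splits by entropy. A minimal sketch of that entropy calculation (not the project's actual implementation, and with illustrative data) is:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum(
        (count / n) * math.log2(count / n)
        for count in Counter(labels).values()
    )
```

At each node, ID3 selects the attribute whose partition of the training data most reduces this entropy (the information gain), then recurses until the leaves are pure.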
Data Mining and Knowledge Discovery In Database Systems
Jamal R. Alsabbagh (GSU)
Research Objectives and Significance:
The surge in research activity in data mining during the 1990s is due to the fact that large users such as the military, telecommunication providers, large retailers, and financial institutions have been collecting vast amounts of data over the last few decades. Furthermore, the amount of data being stored is increasing ever more rapidly due to advances in data acquisition techniques and the availability of inexpensive large-storage devices. Practitioners and researchers soon realized that such data can become extremely valuable in decision making if potentially useful trends and patterns can be extracted from it with minimal human intervention.
Researchers have identified several forms of useful patterns that can be extracted from databases. Our research is concerned with the discovery of the form known as association rules. Briefly, given a set of possible items to choose from (store holdings, features in medical images, etc.) and a large number of choices of different subsets (actual customer transactions, actual features in a medical image, etc.), association rules represent the items that tend to appear together in these subsets above some threshold frequency. We note here that the problem of discovering association rules is also known as the market basket analysis problem.
The development of efficient algorithms and expertise in the field of data mining will enable the ADSRC to apply it to various military databases.
The problem of discovering association rules can be briefly described as follows. Let I be a set of n items. Let T be a database containing a large number of transactions each of which consists of a subset of the items of I. The database T represents the historical data to be mined. Also, let X (called an itemset) be any subset of the items of I. Now if X occurs, in the transactions, more frequently than some user-defined frequency threshold, then it is called a large itemset (meaning that the level of support for the itemset is large).
It is known that generating the large itemsets is computationally expensive. On the other hand, once the large itemsets are generated, deriving the association rules is straightforward. Most algorithms for finding large itemsets reported in the literature are variations of the bottom-up approach of the Apriori algorithm developed by researchers at the IBM Almaden Research Center. In principle, the Apriori algorithm scans the transaction database once for each candidate-itemset length, so the number of passes grows with the size of the longest large itemset. This is a significant bottleneck, since the transaction database can be very large.
In this research, we propose an algorithm which requires 2-3 passes over the database. In the first pass, it identifies the relevant itemsets of length one. In the second pass, it identifies the relevant itemsets of length two. It uses the latter itemsets to identify the potential itemsets of lengths three and above. In the third, and final pass, it identifies the actually relevant itemsets of lengths three and above.
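The three-pass structure proposed above can be sketched as follows. The subset-based candidate generation for lengths three and above is an illustrative simplification, not the actual proposed algorithm, and the basket data are made up.

```python
from collections import Counter
from itertools import combinations

def three_pass_large_itemsets(transactions, min_support):
    # Pass 1: count single items; keep the frequent ones.
    c1 = Counter(item for t in transactions for item in set(t))
    f1 = {item for item, n in c1.items() if n >= min_support}

    # Pass 2: count pairs of frequent items; keep the frequent pairs.
    c2 = Counter(
        pair
        for t in transactions
        for pair in combinations(sorted(set(t) & f1), 2)
    )
    f2 = {pair for pair, n in c2.items() if n >= min_support}

    # Between passes: candidates of length >= 3 are item sets all of
    # whose 2-subsets are frequent pairs (an illustrative heuristic).
    items = sorted({i for pair in f2 for i in pair})
    candidates = [
        frozenset(c)
        for k in range(3, len(items) + 1)
        for c in combinations(items, k)
        if all(p in f2 for p in combinations(c, 2))
    ]

    # Pass 3: count only the surviving candidates against the database.
    c3 = Counter(
        cand for t in transactions for cand in candidates if cand <= set(t)
    )
    f3 = {cand for cand, n in c3.items() if n >= min_support}
    return f1, f2, f3

baskets = [["a", "b", "c"], ["a", "b", "c"], ["a", "b"], ["c"]]
f1, f2, f3 = three_pass_large_itemsets(baskets, min_support=2)
```

The key point is that the database itself is read only three times; the potentially expensive candidate generation for longer itemsets happens in memory, from the frequent pairs.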
A survey of the literature was conducted. The algorithm was formulated and will be presented at the ADSRC annual meeting in November 1999.
We plan to perform the following tasks:
Complete the writing of a paper and report our results at a suitable conference.
Implement the algorithm and evaluate its performance relative to other algorithms.
Formulate, implement, and evaluate a number of heuristics that seem, at this stage, to be suitable for improving the performance of our algorithm.
Publish results in a suitable forum.
Computer Security Model Based On Uncertain/Partial Information
R. Aló, M. Beheshti, A. Berrached, and A. de Korvin (UHD)
Research Objectives and Significance:
Distributed systems provide tremendous new opportunities and benefits to their users but also raise new challenges. Because of the many different components, functions, resources, and users and the tight coupling between the cooperating systems, security in distributed systems is more difficult to achieve than in regular computer networks.
Previously, we developed an access control security model that determines whether a user is permitted to access and perform particular operations on particular data sets based on the user's level of hostility and the sensitivity level of the data affected by the requested service. However, quantities such as a user's level of hostility tend to be difficult to represent since they depend on several attributes which are themselves "fuzzy". The main objective of this project is to extend our previous work and develop a methodology that allows us to obtain the probability of hostility of a user in a system based on a set of available fuzzy values.
This work may have applications for security issues on distributed computer networks where the systems allow general access for trustworthy users. Methods developed in this project can be applied to other applications.
Previously, we developed an access control security model that determines whether a user is permitted to access and perform particular operations on particular data sets in the context of a distributed system. Given the level of hostility of a user in a distributed system and the sensitivity level of the data affected by the requested service, the local host/security guard is called upon to evaluate whether such a request can be safely granted.
In general, information such as expected losses, user hostility, and the allowable damage amount is very difficult to assess precisely in numerical terms, so it is natural to express it in the form of fuzzy sets. In linguistic terms, a user can be described as very hostile, somewhat hostile, or not hostile, and the amount of damage can be expressed as very high, low, or very low. In fuzzy notation, somewhat hostile, for instance, can be expressed as:
Ph = .9/.2 + .8/.1
where the supports (i.e. .2 and .1) are the probabilities of the user being hostile and the .9 and .8 are the membership values. Expected loss can also be expressed in a similar fashion, with the supports being expressed in dollar units for example.
We establish a procedure to determine whether a user xi should be allowed to perform the operation oj on data dk as follows:
(1) Find the expected loss Eijk by evaluating EWjk · Phi, where EWjk is the worst loss expected from performing operation oj on data dk, and Phi is the estimated hostility level of user xi.
(2) Compare Eijk with the organization's damage tolerance t (the amount of damage the organization can tolerate) by constructing a maximizing set M from the fuzzy sets Eijk and t.
(3) Compute Eijk ∩ M and t ∩ M, then compare the two sets. If the greatest membership value of Eijk ∩ M is greater than the greatest membership value of t ∩ M, permission is denied, since the expected loss from allowing user xi to perform operation oj on data set dk is larger than the amount of damage the organization can tolerate. If, on the other hand, the greatest membership value of Eijk ∩ M is less than the greatest membership value of t ∩ M, then user xi is allowed to perform operation oj on data dk.
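The three-step procedure can be sketched as follows. This is a minimal illustration, not the project's implementation: fuzzy sets are assumed to be {support: membership} dictionaries, the maximizing set is taken to be the linear M(x) = x / x_max, and all numeric values and helper names are invented for the example.

```python
def fuzzy_product(a, b):
    """Expected loss E = EW * Ph: supports multiply, memberships combine by min."""
    out = {}
    for sa, ma in a.items():
        for sb, mb in b.items():
            s = sa * sb
            out[s] = max(out.get(s, 0.0), min(ma, mb))
    return out

def maximizing_set(*sets):
    """Assumed linear maximizing set M(x) = x / x_max over all supports involved."""
    supports = {s for f in sets for s in f}
    x_max = max(supports)
    return {s: s / x_max for s in supports}

def height(f, m):
    """Greatest membership value of f ∩ M: sup over x of min(f(x), M(x))."""
    return max(min(mu, m.get(s, 0.0)) for s, mu in f.items())

def permit(expected_worst_loss, hostility, tolerance):
    """Steps (1)-(3): deny when the expected loss ranks above the tolerance."""
    e = fuzzy_product(expected_worst_loss, hostility)
    m = maximizing_set(e, tolerance)
    return height(e, m) <= height(tolerance, m)
```

With a worst loss of {1000: 0.9, 500: 0.6} (dollar supports), the hostility set Ph = .9/.2 + .8/.1 from above, and a crisp tolerance of $300, the request is granted; lowering the tolerance to $50 causes denial.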
However, the effectiveness of the model depends to a great extent on how accurately one can estimate such fuzzy quantities as a user's level of hostility and the expected worst loss.
It is clear that a user's level of hostility (Ph) depends on a number of attributes that are themselves "fuzzy" (e.g., whether the user is attempting the access from a remote or local host, whether the user is attempting to access the local host from a friendly or hostile organization/country, the level of trustworthiness of the host, how closely the requested operation resembles the user's previous access habits, etc.). The traditional method used to estimate such a fuzzy quantity is to compute the fuzzy value of each of the attributes on which it depends, on a normalized scale, and take their (weighted) average. Though simple and computationally efficient, this method is not likely to produce accurate estimates, since the average function is only one of an infinite number of possible relationships. In this project, we developed a method for determining the relationship between a user's level of hostility and the set of factors that affect it. The relationship is determined from a training set (Ak, Phk), where each Ak is a fuzzy set of user attributes deemed to affect the user's level of hostility, and Phk is the corresponding hostility level.
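Once a fuzzy relation R between attributes and hostility levels has been established, a new user's hostility estimate is obtained by composing the user's attribute set with R. A minimal sketch, assuming attributes and hostility levels are discretized on finite scales and the composition is sup-min (all values illustrative):

```python
def sup_min(A, R):
    """Sup-min composition A ∘ R: Ph[j] = max over i of min(A[i], R[i][j])."""
    return [max(min(A[i], R[i][j]) for i in range(len(A)))
            for j in range(len(R[0]))]

# Example (values invented): two user attributes, two hostility levels.
R = [[0.6, 0.2],
     [1.0, 0.2]]
A = [0.8, 0.3]          # observed fuzzy attribute values for one user
Ph = sup_min(A, R)      # estimated fuzzy hostility level
```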
We developed a framework for a security model that uses fuzzy sets to represent uncertain/incomplete information. The model allows a local host to determine access permission based on estimates of a user's level of hostility and the expected worst loss that could be caused by granting such permission. We have also devised a method for establishing a fuzzy relation between a "fuzzy" quantity (such as a user's level of hostility) and the attributes on which it depends. The important feature of this method is that the fuzzy relation it establishes is the maximal relation, in the sense that it represents the strongest correspondence between the target quantity and its dependents. We have also devised a method for approximating fuzzy relations when an exact maximal solution cannot be found using the above method. This part of the work has been published in the proceedings of the IPMU2000 (Information Processing and Management of Uncertainty) conference. We are currently conducting a performance evaluation of this algorithm and comparing it to neural network performance for selected applications. We have also extended this methodology to another relevant application, namely military threat analysis in the context of target selection on the battlefield.
Once the performance evaluation study is complete, we plan to have our results published in an appropriate conference or workshop.
Hoang Chau and Khanh Do, two computer science students, have implemented a simulation program for the proposed algorithm and have tested it with several sample applications.
Threat Analysis Using Fuzzy Set Equations
R. Aló, M. Beheshti, A. Berrached, A. de Korvin (UHD)
Research Objectives and Significance:
One of the important factors in realistically simulating individual vehicles in the virtual battlefield is the targeting behavior of vehicles. Target analysis and selection involve several factors, such as target detection, target identification, and threat analysis. The main objective of this project is to devise a threat analysis algorithm based on fuzzy set theory. Fuzzy set theory has the capability of expressing ambiguous and complex situations in which some or all of the information available for evaluating the threat level of a set of targets is uncertain or not precisely known, as is often the case in real-life situations.
This work is directly relevant to Army applications, as all battlefield simulators, such as the Modular Semi-Automated Forces (ModSAF) system, use one algorithm or another for threat analysis and target selection.
The threat posed by various targets is a function of a variety of circumstances and factors. Based on a review of the literature, we have identified nine factors involved in threat analysis: aggregate threat assessment, near count threat, the target's effective range, target firing status, aspect angle, relative elevation of the target, target movement, target type, and sector of fire. The traditional method used to estimate the threat level of a target is to compute the fuzzy value of each of those factors, on a normalized scale, and take their (weighted) average. Though simple and computationally efficient, this method is not likely to produce accurate estimates, since the average function is only one of an infinite number of possible relationships. In this project, we developed an algorithm for determining the relationship between a target's threat level and the set of factors that affect it. The relationship is determined from a training set (Factork, Thk), where each Factork is a fuzzy set over the factors that affect the target's threat level, and Thk is the corresponding fuzzy set of threat levels. The problem can then be stated as follows: given a training set of pairs of fuzzy sets (Factor1, Th1), (Factor2, Th2), ..., (FactorN, ThN), estimate R such that the system of fuzzy equations Factork ∘ R = Thk is satisfied for all k = 1, 2, ..., N, where ∘ is the sup-min composition operator.
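One standard way to construct the maximal relation from such a training set is Sanchez's α-composition based on the Gödel implication; the document does not name its exact method, so the sketch below is an assumed concrete instance, with invented values:

```python
def g_alpha(a, b):
    """Gödel implication: a α b = 1 if a <= b, else b."""
    return 1.0 if a <= b else b

def maximal_relation(training):
    """R[i][j] = min over k of alpha(Factor_k[i], Th_k[j]).

    This is the greatest relation R for which the sup-min compositions
    Factor_k ∘ R stay within the observed Th_k; when an exact solution
    to the system exists, this R is it."""
    n, m = len(training[0][0]), len(training[0][1])
    R = [[1.0] * m for _ in range(n)]
    for F, Th in training:
        for i in range(n):
            for j in range(m):
                R[i][j] = min(R[i][j], g_alpha(F[i], Th[j]))
    return R
```

As more training pairs arrive, re-running the construction tightens R, which matches the learning interpretation described below.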
We have devised a method for establishing a fuzzy relation between a "fuzzy" quantity, such as a target's threat level, and the factors that affect it. The important feature of this method is that the fuzzy relation it establishes is the maximal relation, in the sense that it represents the strongest correspondence between the target quantity and its dependent factors. This algorithm can be viewed as a learning algorithm: as more precise information is obtained, the relation can be fine-tuned to better fit the new training set. We have also devised a method for approximating fuzzy relations when an exact maximal solution cannot be found using the above method. We are currently conducting a performance evaluation of this algorithm and comparing it to neural network performance for selected applications.
Once the performance evaluation study is complete, we plan to have our results published in an appropriate conference or workshop.
On Firing Rules of Fuzzy Sets of Type II
O. Sirisaengtaksin, Chenyi Hu, and Andre de Korvin (UHD)
Research Objective and Significance:
The main objective of this project is to investigate a set of rules whose antecedents are fuzzy sets of type II and whose inputs are fuzzy sets. First, the special case in which all the fuzzy sets involved have interval-valued memberships is treated. We develop an approach to defuzzify such sets and a range of possible actions. The strength of a rule is, in general, an interval, and the firing result is an interval; additional information may determine how an action is to be picked from the firing interval. In the second part, we generalize these considerations to fuzzy sets of type II, using α-cuts to carry out the generalization. An alternate approach defines the strength of a rule as a scalar instead of a fuzzy set.
The results of this project provide a way to deal with rules whose antecedents are imprecise in nature. This will help in developing intelligent control systems, especially autonomous systems and agents.
We developed an approach to defuzzify fuzzy sets with interval-valued memberships and a range of possible actions. We assume that the strength of a rule is an interval and that the firing result is also an interval. In general, we take into account any additional information that may determine how an action is to be selected from the firing interval. We then generalize these considerations to fuzzy sets of type II, using α-cuts to carry out the generalization. An alternate approach defines the strength of a rule as a scalar instead of a fuzzy set.
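The interval-valued special case can be sketched as follows. This is an illustrative sketch, not the paper's formulation: the antecedent is assumed to assign each domain point an interval membership (lo, hi), the input assigns scalar memberships, and the firing strength then comes out as an interval.

```python
def firing_interval(antecedent, inp):
    """Interval firing strength of one rule.

    antecedent: {x: (lo, hi)} interval-valued memberships (type-II special case)
    inp:        {x: membership} the input fuzzy set
    Returns (lo, hi): sup-min matching against the lower and upper bounds."""
    lo = max(min(inp.get(x, 0.0), l) for x, (l, h) in antecedent.items())
    hi = max(min(inp.get(x, 0.0), h) for x, (l, h) in antecedent.items())
    return (lo, hi)
```

Any extra knowledge about the rule (e.g., a preference for cautious control) can then pick a point, such as the midpoint, from the resulting interval.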
The result was published in the International Journal of Applied Mathematics, Vol. 3 (2000), pp. 151-159.
Modeling Dust Behavior Using Fuzzy Controllers
O. Sirisaengtaksin (UHD)
Research Objective and Significance:
The objective of this project is to integrate a physically based model with fuzzy controllers to visualize dust behavior. The project focuses on realistic simulation of the dust generated by a fast-traveling vehicle. Virtual environments that involve moving vehicles will then be able to simulate physically realistic dust behavior.
In many virtual environments and distributed interactive simulations, we hope to simulate trucks, armored vehicles, and other moving objects. However, the simulations for objects travelling on an unpaved road typically do not generate dust behaviors. Simulating physically realistic, complex dust behaviors is useful in interactive graphics applications, such as those used for education, entertainment, and training. The results of this project can provide a realistic virtual environment in simulating a moving vehicle with dust behaviors.
We will develop a physically based model for dust behavior. The model will utilize particle systems, rigid-particle dynamics, and fluid dynamics. Fuzzy controllers will be integrated for ease of computation and for fast simulation. OpenGL will be used to generate the graphics for simulating the model.
Physical model for dust behaviors
We have investigated the physical problem of airflow around a moving vehicle using separated flow. The main feature of the flow is the turbulence at the boundary of, and behind, the moving vehicle. The model under study uses the elliptic nature of the flow to describe pressure effects.
The fluid-dynamics results provide the laminar flow around the vehicle, where the velocity V and the pressure p at any point in the flow volume are calculated.
The air velocity can be described using Prandtl's boundary-layer theory, where u is a random vector and L is the size of the vehicle (Lcar, Wcar, Hcar).
A dust-particle dynamic consists of three stages: generation, movement, and extinction. A dust particle is generated with an initial position and velocity. Once the particle enters the air, its motion follows Newton's laws. The external forces include gravity and the air-friction drag caused by the turbulent airflow around the vehicle and the surrounding wind. The drag force caused by vehicle movement depends on Vp, the current velocity of the particle; Vair, the current velocity of the fluid at the particle's position relative to the car; and Vcar, the current velocity of the car.
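The movement stage can be sketched with a simple Euler integration step combining gravity and a drag force proportional to the particle's velocity relative to the local air. This is a minimal sketch of the idea only: the drag coefficient K_D is an assumed constant, not a value from the model, and the generation and extinction stages are omitted.

```python
G = (0.0, 0.0, -9.81)   # gravitational acceleration, m/s^2
K_D = 0.5               # assumed (illustrative) air-friction drag coefficient

def step(pos, vel, v_air, dt):
    """One Euler step of the movement stage: gravity plus a drag force
    proportional to the air velocity relative to the particle (Vair - Vp)."""
    drag = tuple(K_D * (a - v) for a, v in zip(v_air, vel))
    acc = tuple(g + d for g, d in zip(G, drag))
    vel = tuple(v + a * dt for v, a in zip(vel, acc))
    pos = tuple(p + v * dt for p, v in zip(pos, vel))
    return pos, vel
```

In a full particle system, each particle would carry its own age and be removed at the extinction stage once its lifetime expires or it settles on the ground.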
We have developed a physical model for dust-particle dynamics. This model has been implemented to develop a fuzzy controller that mimics the mathematical model of the dust-particle dynamic. We are in the process of developing a simulation using FuzzyCLIPS and OpenGL to test our results.
Intelligent Mobile Cellular Networks
M. Balaram (GSU) and M. Bassiouni (UCF)
Research Objectives and Significance:
The communications revolution is both driving and redefining the daily lives of our global community, and it has a profound impact on business and today's economy. Our computing paradigm has shifted toward network-centric computing, and its users are increasingly mobile: they want to access the network any time, from anywhere, and to use network resources despite bandwidth limitations. We require mechanisms that respond to user requests, and tools that process data at its source and ship only compressed answers to the user, so that the network is not overwhelmed with large amounts of unprocessed data. The research objective here is to assess where we currently stand in the arena of wireless communications, and to evaluate our capability to seamlessly integrate voice, video, and data transmission elements into a single integrated telecommunication system with embedded intelligence.
Without doubt, the next era of the wireless market will not belong to companies that simply carry a call; it will belong to those who give users intelligent options for how to field that call, the most versatility beyond voice, and the easiest and most secure methods for changing the makeup of their service packages, delivering the best possible services at the lowest possible cost. We focus our attention on third-generation mobile systems, which provide true "anywhere, any time," cost-effective wireless access to information-age services for all users. We will identify the efforts of the major players in this arena around the globe. Within this framework, the significance of this project is clear.
The U.S. Army has varied technical challenges of national importance that need innovative solutions through the application and integration of hardware, software, and networking technologies with embedded intelligence. In our government, having a clear understanding of events and knowing the exact location and availability of all resources can mean the difference between mission success and failure. Three current projects are of notable importance in this regard. First, the U.S. Army has initiated the Digital Switched Systems Modernization Program (DSSMP) and selected Lucent Technologies as the DSSMP prime contractor; the contract efforts are open to other Department of Defense (DoD) agencies and all federal agencies, and the project is expected to provide the capability to seamlessly integrate voice, video, and data transmission elements within a single telecommunication system. Second, SRI has been developing a Battlefield Wireless Command, Control, and Communications System; the resulting product is InCON, a wireless, portable information management system that can monitor and communicate with aircraft, ships, vehicles, individuals, and command centers. InCON provides deployed tactical forces and command centers with a current, shared view of the entire battlefield situation through a two-way exchange of information in near real time; its key design feature is seamless interoperability and information sharing between different types of users with varied computing and communications devices. Third, DARPA has sponsored the Knowledge Sharing effort to develop techniques and tools that promote the sharing of knowledge in intelligent systems; as a result, the Knowledge Query and Manipulation Language (KQML) has evolved for use in current and future intelligent information integration projects.
The wireless intelligent network infrastructure consists of three major components: hardware, software, and networking elements. Lucent Technologies has developed an innovative Wireless Intelligent Network (WIN) for the rapid creation and deployment of enhanced services and advanced roaming capabilities. The WIN platform supports the IS-41 internetwork standard, which provides the tools for any-time, anywhere roaming. WIN also enhances mobility management and provides superior call routing with the support of the WIN Intelligent Driver (WIN ID). The WIN ID database is accessed by the Wireless Service Control Point (WSCP) each time the subscriber places or receives a call. Mobility is managed by transferring the subscriber's service profile between Mobile Switching Centers (MSCs). The various features of this WIN architecture are software based, with embedded intelligence.
On the second front, International Mobile Telecommunication 2000 (IMT-2000), an initiative of the International Telecommunication Union (ITU), is emerging as a potent model founded on a common, standardized, flexible platform that will meet the basic needs of major public, private, fixed, and mobile markets around the world. IMT-2000 recognizes that future wireless access systems will need to provide users with the same high quality and broadband characteristics offered by wireline networks. In the years to come, a major segment of telecommunication traffic will be mobile voice, fax, and multimedia in nature.
It is clear from the above feature descriptions that the WIN architecture has to deal with several forms of heterogeneity: different platforms, different data formats, the capabilities of different information services, and different implementation technologies. We need a community of intelligent agents that address each of these issues. These agents must work together cooperatively to accomplish complex goals, act on their own initiative, and handle requests from peer agents. The Knowledge Query and Manipulation Language (KQML) was designed to facilitate these activities and has been successfully used to support interactions among intelligent software agents.
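For illustration, a KQML message is an s-expression consisting of a performative (such as ask-one or tell) followed by :keyword value pairs. The helper below and the agent names and content are hypothetical, sketched only to show the message shape:

```python
def kqml(performative, **fields):
    """Render a performative and its :keyword value pairs as an s-expression."""
    parts = " ".join(f":{k} {v}" for k, v in fields.items())
    return f"({performative} {parts})"

# Hypothetical exchange: one agent asks another for a subscriber's profile.
msg = kqml("ask-one", sender="agent-a", receiver="agent-b",
           content="(service-profile subscriber-42 ?p)")
```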
The current WIN efforts basically emanate from industry-government joint ventures; the academic community has had very little role in this major technology development. For example, MILCOM 1999, the world's premier military communications event, scheduled for October 31 - November 3, 1999 in Atlantic City, New Jersey, has no significant participation from academia. The theme of the conference is "Into the Next Millennium - Evolution of Data Into Knowledge," and it is recorded that "the military and supporting industry have the opportunity and obligation to leverage these technological advances to create a communications infrastructure that will achieve information superiority and dominance." The technologies featured at this world-class event will support the information requirements of our "Army After Next" project. It is important, and highly recommended, for the academic community to become part of such future events. We will join hands with our industry partners in creating the intelligent wireless infrastructure that will serve our national needs.
We have identified the promise of a "WIN-like" infrastructure in the global communications arena, the key issues facing today's military-government leadership, and the technologies that will fuel military communications in the 21st century. We have communicated with the major players who are already part of this playing field and who have made promising, measurable contributions. We have laid a strong foundation for building a Wireless Intelligent Environment (WIE) with genuine global service capabilities that serve users anywhere, any time, in a cost-effective way.
We will visit more industry sites with active project work in wireless intelligent networks, develop extensive testbeds that allow controlled experimentation with new tools and technologies in the wireless arena with embedded intelligence, and provide value-added technologies that will support our nation's current and future information infrastructure.