During database construction, the extracted information is converted into a standard format for storage in a database. This requires choosing a database model; when doing so, consider the following:
It should be a well-known model, to make replacing one database implementation with another relatively simple.
It should allow for efficient queries, which is important given that source models can be quite large.
It should support remote access of the database from one or more geographically distributed user interfaces.
It should support view fusion by combining information from various tables.
It should support query languages that can express architectural patterns.
Implementations should support checkpointing, meaning that intermediate results can be saved. This is important in an interactive process: it gives the user the freedom to explore, with the comfort that changes can always be undone.
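The checkpointing criterion maps naturally onto SQL transactions and savepoints. The following is a minimal sketch using Python's standard sqlite3 module (the Dali workbench itself uses POSTGRES, but the idea is database-independent; the table and data here are illustrative):

```python
import sqlite3

# Autocommit mode, so SAVEPOINT explicitly opens the transaction we control.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("create table calls( caller text, callee text )")
conn.execute("insert into calls values( 'main', 'control' )")

conn.execute("savepoint before_experiment")              # checkpoint the session
conn.execute("delete from calls")                        # exploratory change
conn.execute("rollback to savepoint before_experiment")  # undo it
conn.execute("release savepoint before_experiment")

print(conn.execute("select count(*) from calls").fetchone()[0])  # prints 1
```

Rolling back to the savepoint restores the table to its checkpointed state, so an exploratory query or deletion never destroys earlier results.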
The Dali workbench, for example, uses a relational database model. It converts the extracted views (which may be in many different formats depending on the tools used to extract them) into the Rigi Standard Form. This format is then read in by a perl script and output in a format that includes the necessary SQL code to build the relational tables and populate them with the extracted information. Figure 10.2 gives an outline of this process.
An example of the generated SQL code to build and populate the relational tables is shown in Figure 10.3.
When the data is entered into the database, two additional tables are generated: elements and relationships. These list the extracted elements and relationships, respectively.
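One plausible way to derive such summary tables is to take the union of entities appearing in any extracted relation. The sketch below (using sqlite3 for illustration; the column names and the exact derivation are assumptions, not Dali's actual scripts) builds an elements table from a calls relation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table calls( caller text, callee text );
insert into calls values( 'main', 'control' );
insert into calls values( 'main', 'clock' );
""")

# elements: every distinct entity mentioned in any extracted relation
conn.execute("""
create table elements as
  select caller as name from calls
  union
  select callee from calls
""")

# relationships: one row per relation type present in the database
conn.execute("create table relationships( name text )")
conn.execute("insert into relationships values( 'calls' )")

print(sorted(r[0] for r in conn.execute("select name from elements")))
# ['clock', 'control', 'main']
```

In a full workbench the union would run over every relation table, but the principle is the same.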
Here, the workbench approach makes it possible to adopt new tools and techniques, other than those currently available, to carry out the conversion from whatever format(s) an extraction tool uses. For example, if a tool is required to handle a new language, it can be built and its output can be converted into the workbench format.
In the current version of the Dali workbench, the POSTGRES relational database provides functionality through the use of SQL and perl for generating and manipulating the architectural views (examples are shown in Section 10.5). Changes can easily be made to the SQL scripts to make them compatible with other SQL implementations.
create table calls( caller text, callee text );
create table access( func text, variable text );
create table defines_var( file text, variable text );
...
insert into calls values( 'main', 'control' );
insert into calls values( 'main', 'clock' );
...
insert into access values( 'main', 'stat 1' );
When constructing the database, consider the following.
Build database tables from the extracted relations to make processing of the data views easier during view fusion. For example, build a table that stores the results of a particular query so that the query need not be run again. If the results are required, you can access them easily through the table.
As with any database construction, carefully consider the database design before you get started. What will the primary (and possibly secondary) key be? Will any database joins be particularly expensive, spanning multiple tables? In reconstruction the tables are usually quite simple (on the order of dir_contains_dir or function_calls_function) and the primary key is a function of the entire row.
Use simple lexical tools like perl and awk to change the format of data that was extracted using any tools into a format that can be used by the workbench.
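The conversion step is typically a few lines of text processing. The sketch below does it in Python rather than perl or awk, and assumes (as a simplification of the Rigi Standard Form mentioned earlier) that each input line is a whitespace-separated triple of relation, source, and target:

```python
def rsf_to_sql(lines):
    """Turn 'relation source target' triples into SQL insert statements."""
    stmts = []
    for line in lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip blank or malformed lines
        rel, src, dst = parts
        stmts.append(f"insert into {rel} values( '{src}', '{dst}' );")
    return stmts

print(rsf_to_sql(["calls main control", "calls main clock"]))
```

A real converter would also quote embedded characters and emit the create table statements, but the core of the task is just this kind of line-by-line reformatting, which is why simple lexical tools suffice.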