This FAQ is divided into two sections:
Also (at least for the moment), there's a section on questions to be addressed by this FAQ in the future:
NOTE: Specific information on how to perform certain functions in the technical part of this FAQ assumes that you are using Paradox for Windows version 5. Users of other versions may find that some functions are performed differently. (Note: You're on your own with this; only PDOXWIN version 5 is officially supported.)
Tip RE: how to use this FAQ: Remember, you can use the "find text" function of your browser to search for a specific word or phrase in this document!
What is the Monitoring Database?
The Hawaiian Natural Resources Monitoring Database is a software package designed as a tool for data entry and analysis for resource monitoring by land managers in Hawaii. The Monitoring Database is designed in the award-winning Paradox for Windows environment, chosen because of its combination of ease of use for end users and flexibility of custom design (programming) capabilities.
What is the purpose of the Monitoring Database?
The purpose of the Monitoring Database is to facilitate standardized and fully-documented data collection efforts by federal, state, and private agencies. The Monitoring Database features a well-structured, completely relational design, together with a custom user interface designed for ease of use of this powerful system. The system ensures internal data integrity, disallowing certain types of data entry mistakes from the start, and providing a mechanism to ensure data quality and consistency at data entry time. If used correctly, the Monitoring Database forestalls the problems caused by ambiguity and conflicting data which otherwise would be discovered (if at all) only during data analysis (sometimes years after data collection and entry). Additionally, if standard data collection protocols are followed, data collected by various agencies in different areas will be comparable, enhancing the value of each agency's work by allowing comparison of data to that collected by other agencies. This comparability will allow a "big picture" approach to analysis of this type of scientific data never before possible in Hawaii.
How did the Monitoring Database get started, and what does the future hold?
The first precursor to the Monitoring Database
was an idea to put into a database information on all plant
and invertebrate taxa in Haleakala National Park (Maui). Information
was converted from word processing documents and a database
was formed. Other related offshoot projects include a taxon-linked
bibliographic tracking database, an alien species database ("Harmful
NonIndigenous Species" [HNIS]), a database of
plant pathogens in Hawaii, a database used for tracking the Federal
Endangered/Threatened status of Hawaiian species, and a prototyped,
soon-to-be-developed database for tracking feral animal control
efforts. The main infrastructure common to all these databases
(taxon information) was designed to be compatible with that of
the Botany Department
of the Bernice P. Bishop Museum
(Honolulu). (The museum's data structures are based on international
biological database standards.)
The actualization of the Monitoring Database
was sparked by a need of Guy Hughes (then with The Nature Conservancy of Hawaii
[TNCH]
[Maui]) to analyze data and incorporate information and provide
graphic output of complex analyses. Hughes' field methods were
modified versions of those set forth in a document compiled
by Pat Dunn. (Hughes' methods are referred to throughout
the Monitoring Database documentation as the "modified
Dunn protocol.") Recently, the idea of creation of a standard
monitoring protocol was proposed at a meeting of the East
Maui Watershed Partnership (EMWP) for the purpose of gathering
data in the geographic region with which EMWP is concerned.
Since then, TNCH
has used the Monitoring Database to incorporate data from
tests (for EMWP) in Waikamoi of new, somewhat modified field
methods. Based on discussion of these and other methods, a handbook
of suggested guidelines for field methods in Hawaii will eventually
be created by the Hawaii Natural Resources Monitoring Working Group
(sponsored by the Hawaii Conservation Alliance [formerly known as the Secretariat for Conservation Biology]).
By 1996, all organizations using the Monitoring
Database were using a completely standard version of the
Monitoring Database (i.e. the main, non-custom relational
infrastructure was the same). Also, all Monitoring Database
sites use standard nomenclature (based on Bishop Museum's standards)
and standard species identifiers (taxon codes). (Local site administrators
can keep their databases "in sync" with the standards
through information provided to them via an internet list server.)
A new version of the Monitoring Database
is currently (Nov. 1997) under consideration which, based
on input from current users, will enhance ease of use and provide
more flexibility. The proposed changes
will improve upon some data structures (taking advantage of new
database engine features); allow the Monitoring Database to better
accommodate a wider diversity of uses (e.g. integration of
invertebrates into currently plants-only USFWS Endangered &
Threatened species tracking database); allow more flexibility
in data entry (allow more non-numeric values); and improve upon
certain procedures relating to standardization of data (e.g. a separate
field indicating whether taxon codes/taxa are "standard"
or "user-defined"). However, when (or if) the new
version will be created/implemented is contingent upon funding
availability for development personnel.
Where can I get more/updated information about
the Monitoring Database?
For the latest news about the Monitoring Database, subscribe to HIMONDBL, the Hawaii Natural Resources Monitoring Database user's list (see What about technical support? for details). Monitoring Database information, news, updates, and current and historical versions of the documentation are available for download from the worldwide web.
How can I obtain a copy of the Monitoring
Database?
Licenses to use the Monitoring Database are available free of charge to qualifying agencies, organizations, businesses, private landowners, educational institutions, and individuals. Use of the Monitoring Database by a wide range of audiences is encouraged. Licenses granted are licenses for USE of the Monitoring Database software; they do NOT transfer ownership of the software. The software may not be sold or redistributed in part or in whole except as explicitly detailed in the license agreement. One of the main reasons for this is to ensure that all users of the software are known to the Monitoring Database System Coordinator, so users can be apprised of updates to the software.
What about technical support?
Information about Monitoring Database technical support is available in the technical section of this FAQ document.
What about
technical support?
For technical support, it is hoped that users
will share their expertise with users in their own and other organizations,
as well as participate in the evolution and development of
the system by providing suggestions to and working with the the
Monitoring Database Project Coordinator via each organization's
Database Administrator. At this time, technical support for
the Monitoring Database is available to each site's Monitoring
Database Administrator ( seenote)
from theHEAR project.
Additionally, a general Monitoring Database internet mailing list
is available for users to receive the latest information
about the Monitoring Database, as well as to ask and answer
questions, provide and receive insights and tips, and have discussions
with other users. Subscriptions to the HIMONDB-L list may
be requested by sending email to LISTPROC@HAWAII.EDU
with a BLANK subject line, and the contents of the message
being "SUBSCRIBE HIMONDB-L your name" (no quotes)
from the email account to be subscribed. After subscribing,
you can send correspondence to the group at HIMONDBL@HAWAII.EDU.
There is also an internet mailing list for Monitoring Database administrators.
The HIMONDBA-L list is for communication of information to
(and among) Monitoring Database System Administrators. This is
the "official" means of communicating to Monitoring
Database sites updates to standard data. Changes/corrections/additions
to standard data/codes, software updates, and other system administration
information are distributed on this list. Typically, only one
person per site/organization is subscribed to this list. The information
is typically technical, and large attachments are sometimes sent.
Subscriptions to the HIMONDBA-L list may be requested by
sending email to LISTPROC@HAWAII.EDU
with a BLANK subject line, and the contents of the message
being "SUBSCRIBE HIMONDBA-L your@emailaddress your name"
(no quotes; substitute your info for lowercase text) from
the email account to be subscribed. After subscribing,
you can send correspondence to the group at HIMONDBAL@HAWAII.EDU.
Support for Paradox for Windows for questions not
directly related to use of the customized portion of the Monitoring
Database is available directly from Corel Corporation.
General information about Corel and Paradox for Windows is available
at Corel's Paradox website.
Corel provides free downloadable, searchable Paradox manuals online.
Currently, installation support from Corel is free, and there
is a free online technical support library
for the latest version; other technical support is pay-per-call
or by contract. Another support service provided by Corel is a threaded discussion list regarding Paradox topics.
Additional information about Corel's technical support for Paradox
is available at Corel's Paradox technical support web page.
HEAR also sponsors an internet mailing list for Paradox for Windows (ObjectPAL) programmers.
The membership of this list consistently comprises over 100 ObjectPAL
programmers from around the world; a good response rate
to reasonable questions is the norm. Subscriptions to the OBJECTPAL-L
list may be requested by sending email to LISTPROC@HAWAII.EDU
with a BLANK subject line, and the contents of the message
being "SUBSCRIBE OBJECTPAL-L your name" (no quotes)
from the email account to be subscribed. After subscribing,
you can send correspondence to the group at OBJECTPALL@HAWAII.EDU.
NOTE: To ease the time burden on the Monitoring Database Project Coordinator,
ONLY an organization's Database Administrator should contact the
Monitoring Database Project Coordinator directly with PDOXWIN/Monitoring
Database questions. Other users should first attempt to resolve
questions from within their own organization (ask your Database
Administrator or other designated inhouse support person).
If the problem cannot be resolved in this way, the fully-apprised
Database Administrator should then contact the Monitoring Database
Project Coordinator for support.
Training is available for Monitoring Database administrators, both onsite at your place of business and through training sessions such as the HEAR-sponsored 1997 Monitoring Database Administrator's workshop (which may become a regular event, given sufficient interest).
Why
is the Monitoring Database so complex? Wouldn't it just be easier
to use a spreadsheet?
The short answer is: data integrity, data integrity,
data integrity!
A well-designed database should be a model of
some aspect of the "real world"; i.e., the data
entities (tables, fields, data values) should reflect some reality
(e.g. if you count 7 plants of a certain species,
the data value you record is "7"; if you're working
in plot #1, you tell the system you collected data in
plot #1; if Lloyd L. Loope collected the data, an entry
which corresponds to him [e.g. data collector="LLL"]
is made in the database).
Likewise, the relationships among data
entities should reflect some reality: before you enter a new Species
in the TAXA table, the system must already have information about
the Genus; before you tell the system that "LLL"
was the data collector, the system must "know" (you
must have told it) that "LLL" exists (and know some
information about "LLL", such as that the person's Name
is "Lloyd L. Loope"). These relationships should intuitively
make sense: they are based on real-world relationships (every
species has a Genus [e.g. Bidens for Bidens alba],
and there are no species which do not have a Genus; every data
collector has initials [e.g. LLL for Lloyd L. Loope], and
every data record has a data collector [whether or not you know/remember
who it was]). (Refer to the table relationship diagram
to see a graphical representation of the relationships among data
entities established by the Monitoring Database).
The relationships (e.g. "'LLL' represents
'Lloyd L. Loope'") and integrity checks (e.g. "every
Species must have a Genus") described here comprise
what are referred to as "business rules" in the business
data processing world. There are "business rules" for
natural history data collection (as exemplified above), as
well. SPREADSHEETS DO NOT ENFORCE THESE BUSINESS RULES;
a properly-designed relational database does.
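To make this concrete, here is a tiny sketch of what "business rule" enforcement looks like in a relational database. This is illustrative only: the Monitoring Database itself is built in Paradox for Windows, not SQLite, and the table and field names below (genera, taxa, collectors) are hypothetical stand-ins, not the real schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce the relationships

conn.execute("CREATE TABLE genera (genus TEXT PRIMARY KEY)")
conn.execute("""CREATE TABLE taxa (
    species TEXT PRIMARY KEY,
    genus   TEXT NOT NULL REFERENCES genera(genus))""")
conn.execute("""CREATE TABLE collectors (
    initials TEXT PRIMARY KEY,
    name     TEXT NOT NULL)""")

# The Genus must exist before the Species can be entered...
conn.execute("INSERT INTO genera VALUES ('Bidens')")
conn.execute("INSERT INTO taxa VALUES ('Bidens alba', 'Bidens')")
# ...and the collector must be "known" before data is attributed to them.
conn.execute("INSERT INTO collectors VALUES ('LLL', 'Lloyd L. Loope')")

# Violating a "business rule" fails at data entry time, not analysis time:
try:
    conn.execute("INSERT INTO taxa VALUES ('Mystery sp.', 'Nonexistent')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # the bad row was refused up front
```

A spreadsheet would have happily accepted the "Nonexistent" genus; the database refuses it the moment you try to enter it.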
What this means is that if you are entering data
into a properly-set-up database, you cannot be sloppy about
data entry: you are forced to follow the "business
rules". This shifts the "buck stops here" moment to data entry
time, rather than the more typical moment: data analysis time.
Field managers are often used to being able to "get away
with" not making crucial (and hard!) decisions "up front."
It may seem such a relief to get the field work done
that you feel like you're "through" with data collection
before you pack up your vehicle to return to camp that last day.
And, of course, data entry is boring and time consuming; there's
always something more pressing (=exciting) to be done. Unfortunately,
this shifts the decision-making time to data analysis time, which:
(1) is often done well after the data
collection incident (sometimes years later), and (2) is always done
during a time-crunch [yes?]. What this means is that, even if the data collection
personnel are still around, it's hard to remember all the details (of every
single data point for every species at every station) of what
happened during data collection. So, if there's a question
(e.g. "What did I/they mean by 'Drx sp ( yel)'?",
or "Did I/they mark that out, or is that a smudge?"),
it's hard-to-impossible to answer it accurately at data analysis
time (and therefore it's easy, or even the only option, to ignore [or,
surely not!, make up] data). ("And besides, who has
time to track it down, anyway?")
The upside is that, even years
later, you, your managers, the next generation of field staff,
and cooperating agencies can feel very confident in the quality
and completeness of the data that's available on your system.
(Of course, completeness can only be assured if you've
filled out appropriate "metadata" [information about
your data, e.g. "LLL" = "Lloyd L. Loope"].
You can ensure that you don't forget about things like this by
building them into the "business rules" of your system,
and having the database enforce these rules at data entry time.)
Data entered into a well-structured relational database system is always instantly available to answer even the most complex questions. The data relationships will make sense to the well-informed manager, and the data is in a format conducive to asking questions "from all angles." Also, the fact that your system's rules are enforced from the get-go means that you know what assumptions you can make about the data when you're constructing queries: you are assured of "referential integrity"-e.g. you won't all-of-a-sudden discover that you don't know who "LCM" is. (Again, NONE of this can be assumed about any data entered into a spreadsheet.)
No matter how meticulous you are,
you cannot guarantee the integrity of your data
unless you have mechanisms in place to strictly enforce your system's
"business rules." (Just ask anyone who's ever converted
a serious spreadsheet to a relational database system! This
process will bring to the surface all sorts of things you wouldn't
have anticipated!)
What if the data entry screen for a particular
data set type forces me to enter a value for information that
I don't have available? How do I deal with "missing data"?
If you have a situation where data is missing but
is required by the system (for entry of field data), you must
realize that in order to accommodate exceptions, your METHOD (data
set type description) is no longer standard; it, as well as
the "business rules" of your system, must be modified
to explain the discrepancies that you are introducing into the
data set. If you are using one of the standard data set types,
you must realize that you are NO LONGER adhering to the standard
methodology. You should define a new data set type (which can
be based on an existing data set type) and include in its description
the specific situations in which you may have missing data, how
these exceptions are handled in your data set, and how these situations
are to be handled during data analysis. You then need to create
(or have created) a new data entry screen (form) for
this method, with validity checking ("business rules")
modified to allow exceptions for missing data. (Of course,
if you are using a nonstandard [custom] method/data set type,
you may create your own data integrity rules-specific to your
particular data entry form and/or table. But you should still
acknowledge in your method/data set type description the specific
situations in which you may have missing data, and how this situation
is handled in your data set, etc.)
One aspect of "business rules" incorporated
into databases (as discussed elsewhere
in this FAQ) is that they force the user to acknowledge exceptions.
For example, if you have plots including 4 subplots,
your data collection method description could be that you "counted
each species in each subplot." A data analysis method might
then be to "add all subplot values for each species for each
plot, then graph this total by species." (Simple, easy!)
However, if you allow data to be SKIPPED, things
become much more complex. You must then acknowledge this in a
more involved description of your methodology, e.g. that you then
"counted each species in each subplot; however, in some cases,
species counts for certain subplots were not available."
Additionally, you should devise a method to distinguish accidentally-skipped
data entry errors from deliberate decisions to identify a data
point as "no data available." This situation creates
the necessity for a more involved data analysis method: no longer
can you write a simple query to "sum all subplot values by
species" (a 1-line single query); you must then decide how
to deal with the omitted data (carefully documenting this in your
method description). You may now need to write more than one query-or
a simple script (program) to deal with the problem created by
this. For example, you could decide to just omit from analysis
plots/species which have any missing data. Then, however,
you may need to account for at least presence/absence of a species
in a plot even if you don't have numeric data for it-which adds
even more to the complexity of both data structure and analysis.
Alternatively, you may decide (in order to have across-the-board
"comparable" plot values) to create a "weighted
sum" for each plot/species by taking the average of the existing
subplot values (if there are fewer than 4) and substituting
that average for each missing value. That is, for a plot with
one subplot value missing (the remaining values being 14, 18, and 6), you'd
add the existing subplot values and divide by the number of valid
values (thus [14+18+6]/3=12.66), then add the rounded
result 4-n times to the sum of the n valid values,
giving [14+18+6]+13=51 as your total for the plot/species. Of course,
this only works if you have at least one valid subplot value
for a given species in the plot (i.e. if the species was present
in the plot, but NO subplot values were taken, you have no data
upon which to create a modified plot value); you must also, of
course, account for these exceptions in your data analysis method
(and methodology writeup). (Not simple, not
easy!)
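The "weighted sum" calculation above can be sketched as a small function. The function name and signature are illustrative, not part of the Monitoring Database:

```python
def weighted_plot_sum(subplot_values, n_subplots=4):
    """Substitute the rounded mean of the observed subplot values for
    each missing subplot, then sum.  Requires at least one valid value."""
    n_valid = len(subplot_values)
    if n_valid == 0:
        # No subplot values at all: no basis for imputing a plot total.
        raise ValueError("no valid subplot values for this plot/species")
    mean = sum(subplot_values) / n_valid
    filled = round(mean)  # rounded substitute for each missing subplot
    return sum(subplot_values) + filled * (n_subplots - n_valid)

# The worked example from the text: subplots 14, 18, and 6, one missing.
print(weighted_plot_sum([14, 18, 6]))  # (14+18+6) + round(12.66...) = 51
```

Note that when no subplot values are missing, the function simply returns the plain sum, so the "weighted" and unweighted totals agree for complete plots.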
"Missing data" situations such as this will inevitably occur in the real world; however, it must be acknowledged that such exceptions imply nontrivial changes to documentation, method, and analysis. You should consult your organization's main Monitoring Database Administrator-and probably a database designer-before making final decisions about how to handle specific "missing data" situations.
Why is it suggested that tables created by EXTRACT be used for queries instead of just using the FIELDATA table directly? What exactly does the EXTRACT function do, anyway?
There's no reason that you CAN'T use the FIELDATA
table for queries; you're certainly welcome to. However, FIELDATA
field names are generic, and therefore do not adequately describe
the data contained in the field: the field definition is
determined by the setup of the Data Set Type (field name correlations
should be documented in your DATASTYP table). (Further discussion of FIELDATA field names
is available elsewhere in this FAQ.) Also (and maybe more importantly) there
are many situations (e.g. if you need any mathematical operation, such as
sum or average, ACROSS "values" [i.e. value001, value002,
...valuennn], or if you must select anything based on multiple
"values") where it is MUCH simpler to use the extracted
data.
All the "extract" routine does is create
real-world field names (e.g. "Plot" vs. "MU level
3") and "normalize" the data (creates a single
record for each "value"). It does NOT manipulate the
data values in any way, shape, fashion, or (dare I say it?) "form"!
;)
For example, a FIELDATA record with the following
values:
Project | ... | MU level 1 | MU level 2 | ... | Value001 | Value002 | Value003 | Value004 |
HEAR | ... | 1 | 2 | ... | 3 | 7 | 2 | 4 |
...would simply be transformed into *4* records (one
for each Valuennn above):
Project | Transect | Plot | Hit count |
HEAR | 1 | 2 | 3 |
HEAR | 1 | 2 | 7 |
HEAR | 1 | 2 | 2 |
HEAR | 1 | 2 | 4 |
That's all it does!
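The transformation above can be sketched in a few lines. This is only an illustration of what the extract step does (the real routine is written in ObjectPAL, not Python), and the renamed field names ("Transect", "Plot", "Hit count") are taken from the example, not from a fixed schema:

```python
def extract(record, value_fields, renamed_keys):
    """Turn one wide FIELDATA-style record into one row per value.
    Only renames fields and normalizes rows -- never alters the values."""
    rows = []
    for vf in value_fields:
        row = {new: record[old] for old, new in renamed_keys.items()}
        row["Hit count"] = record[vf]
        rows.append(row)
    return rows

# The wide record from the example above:
wide = {"Project": "HEAR", "MU level 1": 1, "MU level 2": 2,
        "Value001": 3, "Value002": 7, "Value003": 2, "Value004": 4}

rows = extract(wide,
               ["Value001", "Value002", "Value003", "Value004"],
               {"Project": "Project", "MU level 1": "Transect",
                "MU level 2": "Plot"})
for r in rows:
    print(r)  # four rows, with Hit counts 3, 7, 2, and 4
```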
There are certain queries you wouldn't even want
to begin to create on FIELDATA (esp. when the number of Values
gets higher; ask Coleen if you don't believe me! She's been through
this before...); e.g. -- show me all species which have a Hit count of greater than 5
in ANY plot... then, change that, and show me all species which
have a Hit count of greater than 10 in any plot... Think about
it! (esp. if you had 20 or so values...). It is completely impractical
to perform some types of queries on data structured like the FIELDATA
format. (See also discussion in this FAQ RE: the rationale behind FIELDATA's structure.)
(Don't want to use Paradox's queries, graphs, or its other tools
to do data analysis? See the discussion of exporting Paradox tables to spreadsheets in this FAQ.)
The other thing (and nearly as important) is that
your queries will make MUCH more sense when you (or someone else)
look at them 6 months or so after you initially wrote them, because
they use field names which actually make sense for their data
set type (method).
Also, queries on extracted tables are MUCH quicker...
they don't have to wade through the irrelevant data which doesn't
correspond to the data set type you're analyzing.
The reason FIELDATA exists is so that you don't have to keep up with all the data set types (methods) that you've created in case you want to ask a question about anywhere you've ever seen/any data you've ever collected about a particular SPECIES. (Also, it would be very tedious to do data entry into [or write programs for data entry into] completely normalized tables, like those created by the EXTRACT routines.)
So, why not just create multiple normalized tables (one for each data set type/field monitoring method)?
There are several reasons for not doing this. In fact, this question addresses two issues which need to be addressed separately: (1) multiple tables; and (2) normalized data tables. One reason for not breaking the data up into separate tables is that if you can put all data in a single table (with each data set type indicated by a field value in each record), you can easily ask questions BY SPECIES on your ENTIRE DATA SET--e.g. "show me ALL data points I have for [species x]." If a new table were created for each data set type, you'd have to remember EACH TIME you create a new table to update ALL QUERIES of this type ("now, what were all those queries' names...?") to include every new table. In addition, keeping the data in a single table better ensures that your data is in an understandable format, and allows the Monitoring Database infrastructure to work for you: metadata structures for all fields are already set up (and, hopefully, procedures are established in your organization to ensure that these metadata fields are completely filled in), so (1) you have somewhere you can put the metadata; (2) you can't (easily) omit metadata documentation (essential for long-term usability of your data); (3) you can always use a standard set of procedures to document your data set. (It may be "easier" in the short term to set up an arbitrary table for each data set type, but the reasons given above sway me towards the little up-front thinking required to ensure long-term data viability.) (THRETMON is an exception, since parts of the data entered into that table are qualitatively different from the data entered into FIELDATA; i.e. THRETMON data values are non-numeric. THRETMON data values could be entered into FIELDATA if they were translated [by the data enterer OR by the form] into numeric values at data entry time; but, believe it or not, I hate "codes" as much as you do!)
If you are creating a new table for a data collection method (data set type), it is STRONGLY recommended that you follow the suggested procedure for doing so (basically, by copying the structure of the FIELDATA table & basing your new table on that structure) whenever feasible. (If this is done, your new table and FIELDATA are essentially--from the database design perspective--the same table. Obviously, however, you still run into the problem of having multiple physical tables, which causes the problem mentioned above RE: updating across-the-board species-based queries.) Regarding normalization, you might recall that I have mentioned that, in general, normalization is a good thing. In general, it is; however, there are exceptions based on "extenuating circumstances". The exception in this case is due to the fact that the tables' initial purpose is to allow easy DATA ENTRY. It would be rather difficult to enter data into a completely normalized table using a simple form, and rather difficult to create a form to allow data entry into a completely normalized table. The only part of the data in FIELDATA (and THRETMON) that is not normalized is the set of data values (Value001, Value002, ...ValueNNN) for each record; everything else about the data IS normalized. The main benefits of normalization are (1) ensuring nonredundancy of data [which is handled sufficiently by the existing structure] and (2) ease of querying. The EXTRACT function built in to the Monitoring Database (for each data set type) allows you to create a completely normalized table and use it--in a "read-only" situation--for your queries. (Note: It's "completely normalized" for all practical purposes. Purists will notice that the existence of the "notes" field in the EXTRACTed tables violates the strict definition of perfect normalization; they will also realize why I left it there [for simplicity's sake].)
How do I set up Paradox to work on a network?
There are several crucial setup items that Paradox for Windows requires before it can be used over a network. These instructions assume that there is a local copy of the Paradox program on each PC, but that users are sharing a database (tables/etc. in a shared directory). These things must be set up for every machine accessing the data, including the local machine where the data resides.
Learning a bit of the theory involved will help you understand why network setup is required, and how Paradox handles multi-user access to its data (or, you can just skip to the nitty-gritty practical part).
The theory: Paradox for Windows (PDOXWIN) enables concurrent access to data in a database by multiple users on a network (or by multiple sessions on the same machine, provided that each session has an independent "PRIVATE" directory). In order to allow this access without the potential for corruption of data (example follows), PDOXWIN must keep track of data being accessed, and know who has control of the data. PDOXWIN allows locking down to the record level; in other words, it tracks and coordinates control of data down to the level of each ROW in each table. PDOXWIN allows users to simultaneously access information in a particular record, but only one user can change that record at any one time; this scheme ensures data integrity among users. (Other [higher] levels of locking are available programmatically as well [e.g. table]; I'll be discussing only the finest locking level in this section.)
For example, if three users (Coleen, Roy, and Pua) were accessing the Monitoring Database on a network, they could all three view the data in a particular record (e.g. the "MicCal" record in TAXONCOD table). However, at any given time, only ONE of them could be modifying it; e.g., PDOXWIN will implicitly lock the record as soon as Roy (while in edit mode) changes the value of any field (e.g., the "In-house taxon name") on his screen. Coleen and Pua could still view the record with its ORIGINAL value intact until Roy posts the record, at which time the value of that field would AUTOMATICALLY be updated on Coleen's and Pua's screen (nearly) simultaneously. (Try this sometime! It's a kick!) However, until Roy posts the record (implicitly or explicitly), neither Coleen nor Pua can place a lock on that record--thus precluding any of their attempts to modify that record (they'll get a "record locked by [user]" message if they try, and the system will ask them if they wish to wait 'til the other user has posted the record, or whether they wish to abandon the lock attempt). One implication of this is that if two users were trying to change a record more-or-less simultaneously, the one who "got there last" would ALWAYS have the opportunity to see the effect of the other user's change before attempting to change the value him/herself.
(Note: Records are locked implicitly when a user attempts to change a field value in that record [assuming that the user has the table in edit mode, and has the appropriate security and network access]. Records can also be explicitly locked by the user [F5]. Records are posted implicitly when the user leaves the current record [PDOXWIN must post/unlock the record--or cancel changes/unlock--before leaving a locked record; this is why you don't ever have to explicitly save data in PDOXWIN tables]. Records can also be posted explicitly by the user [use the "Post changes" button, or Shift-F5/Ctrl-F5].)
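The locking behavior described above can be sketched conceptually. This is NOT how PDOXWIN/IDAPI is actually implemented; it is just an illustration of the rule that only one user may hold a record's lock at a time, and that posting the record releases the lock:

```python
class RecordLocks:
    """Conceptual sketch of record-level lock bookkeeping."""

    def __init__(self):
        self._locks = {}  # (table, record_key) -> user holding the lock

    def lock(self, table, key, user):
        holder = self._locks.get((table, key))
        if holder is not None and holder != user:
            # Caller would now be asked to wait or abandon the attempt.
            return "record locked by %s" % holder
        self._locks[(table, key)] = user
        return "locked"

    def post(self, table, key, user):
        """Posting a record releases its lock, making the change
        visible to (and lockable by) other users."""
        if self._locks.get((table, key)) == user:
            del self._locks[(table, key)]

locks = RecordLocks()
print(locks.lock("TAXONCOD", "MicCal", "Roy"))     # 'locked'
print(locks.lock("TAXONCOD", "MicCal", "Coleen"))  # 'record locked by Roy'
locks.post("TAXONCOD", "MicCal", "Roy")
print(locks.lock("TAXONCOD", "MicCal", "Coleen"))  # 'locked'
```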
In short, network setup features are necessary in order to allow PDOXWIN to maintain data integrity by managing concurrent access to data by multiple users.
The mechanism by which PDOXWIN achieves this management is to keep track of which user has which tables/records/etc. locked in a commonly-available network control directory on the network (i.e. all PDOXWIN network users must have WRITE access to this directory). Each network-capable PDOXWIN installation must be "told" where this network control directory is on the network (via the setup procedure described in the "practicalities" section), and must have WRITE access to this directory before starting a network session (i.e. opening a database which is available to other network users).
If the system is not able to obtain appropriate access to the specified network control directory, the user will have the option to either abort the attempt to open the database, or to resume opening the database without network-access capabilities. In the latter case, this user will have exclusive access to that database until his/her database session is ended. This means that no one else on the network can access this database until that session is terminated (PDOXWIN closed, or that user switches to another working directory). Obviously, this isn't typically a good thing to do if multiple people are dependent upon the database.
The program which manages network data access (among other things) is called IDAPI (Independent Database Application Programming Interface; yeah, right! whatever...). IDAPI gets its information about your specific installation from the IDAPI configuration file (usually the only file on your system named IDAPI.CFG). You can see/change the contents of this file by using the IDAPI Configuration Utility (an icon for this should be in your "Paradox for Windows" program group; otherwise, find & run the IDAPICFG.EXE program on your system).
(Note: There can actually be multiple configuration files available to PDOXWIN [and they don't have to be named IDAPI.CFG]. There should probably only be one IDAPI configuration file on your system [named IDAPI.CFG, unless you've explicitly renamed it and/or created additional configuration files], and your system should be set up to use that file [PDOXWIN setup takes care of all this for you initially; you should never have to mess with all this stuff unless you've explicitly changed it, OR PDOXWIN has been {inadvertently, I would hope!} installed on your system multiple times]. FYI, to check on/change the IDAPI configuration file your system is using, use the "Local settings utility" [again, an icon should exist in your "Paradox for Windows" program group; otherwise, find & run the PWLOCAL.EXE program on your system].)
Your IDAPI configuration file is the place that PDOXWIN stores the information RE: which network control directory to use for your PDOXWIN session. The information in the IDAPI configuration file that indicates the name of the network control directory to be used is the "NET DIR" parameter of the "PARADOX" driver, located on the "Drivers" tab in the IDAPI Configuration Utility. The directory specified here must be the SAME directory specified in the IDAPI configuration of each other user on the system who will be sharing network data with you (more details in the "practicalities" section); remember, this directory is where PDOXWIN (or, technically, IDAPI) keeps track of all network users' data accesses.
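As a purely hypothetical illustration (the drive letter and directory name below are made-up examples--use whatever your site has standardized on), the "NET DIR" parameter entered in the IDAPI Configuration Utility might read:

```
NET DIR = P:\PDOXWIN\NETDIR
```

Again, this exact literal string, character for character, must appear in every network user's IDAPI configuration.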
So, ready to set it all up now? (Don't worry, it's not as bad as it sounds!)
The practicalities: Don't be intimidated by the length of this section. If everything always worked "as planned" it could be a lot shorter. But, I've at least tried to cover all the contingencies I can think of--so that makes it longer. Anyway, without further ado, here's the step-by-step:
After rebooting, your server should then recognize "P:\" the same way your client systems do.
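For reference, the "SUBST" line in the server's AUTOEXEC.BAT might look like the following (the drive letter and path here are hypothetical examples--substitute whatever drive letter your client systems actually map, and the actual local directory on the server):

```
REM Make the server see its local PDOXWIN data directory as P:\,
REM matching the drive letter the client systems use.
SUBST P: C:\PDOXDATA
```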
WARNING! I have NOT tested the "SUBST" method on Win95b (the newer [late 1997] "MS Internet Explorer" version of Win95) (or higher versions of Win95, or WinNT v.4 or higher). Rumor has it that there are restrictions on running DOS programs in certain Win95b configurations (i.e. FAT32); I have no idea what effect this might have on the use of the SUBST command in AUTOEXEC.BAT. I would be very interested to hear of your success (or lack thereof) with this method in various configurations of higher-release versions of Win95. (There may be a better way to do this anyway, using the Win95 registry; I've just never pursued this, 'cause the "SUBST" method has always worked for me.)
WARNING! I have heard that "SUBST" can cause conflicts with other software (although I have never encountered any problems on any system that could be traced to this). In case of any system or software problems which occur after making this change, you might try temporarily REMming the SUBST command(s) in your AUTOEXEC.BAT & see if this makes any difference. Just wanted to mention this so you'd be aware of it.
IMPORTANT! Remember that the system selected to be the (network control directory) server must remain up & accessible to all network users at all times.
IMPORTANT! All users who will be accessing the PDOXWIN network data must have full WRITE access to the network control directory.
IMPORTANT! The specified directory must be the same physical directory for all PDOXWIN network users.
IMPORTANT! The specified directory must be specified using the exact same literal string value for all PDOXWIN network users. (Admittedly, this seems overly-restrictive, but that's the way it is.) For Windows-networked systems, this has never caused a problem for me; it just requires a little forethought in network setup (and individual system setup), and CONSISTENCY of implementation of the decided-upon setup standards (see details).
(Note: PDOXWIN is a "16-bit" program, and does not support long filenames. On systems that do support long filenames [e.g. Windows 95], if there are any "long filename" elements [directories or filenames not conforming to the DOS "8.3" standard, i.e. with >8 characters, or containing spaces] in the path to your file in question, these will be shown using the "DOS [short] filename" format, which will probably include a tilde ("~") as the second-to-last character of any element [directory or filename] which has a non-"8.3-compatible" name. Check your operating system's documentation for further info about this.)
Hi Philip...
Is there any way a PDOX report (in either tabular or graph format) can be exported to a MS Word document? I've looked in the on-line help and don't see any reference to doing so. Is there some intermediate step that could be done (like routing it through another program) that could eventually get it to Word?
...Coleen
Well, after a little research (which it seems I've done before with the
same results), it looks like I'm back to the old "do a screen dump and
insert it as a graphic." This works fairly well (as exemplified by the
attached PDOXWIN report in a Word doc), though the resolution may not be
as crisp as the original. It may also depend on the resolution of your
screen when you do the screen dump.
Basically, the procedure is to display the report on your screen (highest
possible resolution for your screen is probably best; ask your local PC
guru how to set this); then (at least on Win95), press the Print screen
button (which copies what's on the screen to your clipboard buffer), open
your favorite graphics program (PhotoShop, LViewPro, or even good ol'
Paintbrush [MSPaint]), and "Paste" the clipboard as a new image. Then
crop, tweak as desired [e.g. change color depth to "2" if B&W only], etc.
& save as a .TIF file (or other format that Word can import). Use Insert | Picture in Word to insert the picture. Resize/arrange as desired.
Remember, a "report" from pdoxwin is basically just a graphic image (as
is any non-text-only output to a printer). The problem is having a printer
driver that will output in a format importable to Word.
On the surface, it looks like the "EPS" option is a good one: Word can
insert EPS graphics (supposedly), and PostScript printer drivers can
usually print to EPS files. However, I've never been able to make this
work. Suggestions/solutions from anyone reading this are solicited!
For highest quality, if the reports can be "standalone" (each on a
separate page, not integrated with other text), a way for final
presentation (for all practical purposes, non-editable, except for very
minor corrections) is to print both the Word doc & the PDOXWIN report to
.PDF files (via Acrobat Exchange software), then merge the pages as
appropriate in a final .PDF document. This also has the advantage of being
universally readable by anyone w/ access to the FREE Acrobat Reader
software (and--of course!--postable to the web). This is a good solution
for FINAL PRODUCTS, but not as good for "living documents" (since they
have to be patched together again each time all but the most minor
changes are made).
I've got Paradox installed, and a copy of the Monitoring Database on a disk. How do I initially set up my system to make the Monitoring Database run? (What aliases do I need to configure? How do I do this? [What are "aliases", anyway?])
(explain)
How does one set up a field method to conform to the "standard structures" set up in the Monitoring Database? At what point should I create my own custom table instead of trying to work within the existing Monitoring Database structure?
(explain)
What do all these strange field names in the FIELDATA table mean?
(explain)
How does the MONUNITS table work?
(explain why/how multiple monitoring unit levels are entered into the same table; explain auto-increment features for data entry [create one with increment level of "1" and/or user-defined{?}]) (tools to automate seemingly repetitive info)
Why is there so much data entry required for the DATACOLS table? Isn't there an easier way?
(explain the "info at the appropriate level" concept [again]; mention ability to automate entry of 2nd and subsequent DATACOLS sets; mention upcoming automated routine) (tools to automate seemingly repetitive info)
Paradox sometimes crashes for seemingly no reason. What gives? How do I fix it and keep it from recurring?
(SAVEPDOX; correct configuration [works for WFW & Win95])
When I'm creating queries, what does the "check mark" do? What's the difference in the "check mark" and the "check plus"?
(explain multiple functions of check mark; explain difference in check and check plus; refer folks [generally, and w/ specific help topic to look up] to online help for more info)
How do I know what tables to link for a query on the Monitoring Database?
(use table relationship diagram; explain concept of linking related [key] fields)
What's the correct procedure for adding new species and taxon codes to the Monitoring Database?
(explain)
How do I update the Monitoring Database to reflect changes in taxon codes/species (e.g. if, after initial data collection/entry, we identify a previously unidentified species [i.e. a species which formerly had a "loc." rank in the TAXA table])?
What are the key features of the Monitoring Database system?
(itemize: standard taxonomy; good relational structure; full metadata; ease of querying; consistency of data; validity checking; data integrity; systematic update of nomenclatural/taxon code info; can export to anything; it's free, and comes with free support!)
How can I figure out why PDOXWIN won't let me delete a record (e.g. from TAXONCOD table)?
(to see which tables may contain data affecting this operation, look at table info, dependent tables...)
But I like spreadsheets; where do they fit in?
There's no reason you shouldn't use spreadsheets for data analysis.
(PT, answer: what they're NOT for; what they CAN be used for)
Hey! Paradox version [xxx] is already out! Why do we have to use version 5?
(pt: get this info from recent e-mail to Eric N.)