Question : Optimizing code for use across a network.

Hello and thanks ahead of time,

  I am new to writing code for Microsoft Access over a network and have been presented with a problem. The database I am working on was developed by someone else who obviously didn't know what they were doing and didn't bother to find out how to do it correctly. This Access application is generally installed on two machines at each of our satellite operations: the data is stored on one machine while both machines get a front end.

To give you a little background about the business: we buy livestock, specifically hogs, sort them into categories depending on weight and condition, then sell them. When a purchase of these hogs is entered into the system, they are given a "GroupID" based on today's date and the category they were sorted into. When we do a sale or a "resort" of these groups, we select the GroupID from a combo box. The form's record source is a table called "Activity", which has a field for GroupID (this is only used on resorts; sales actually store the GroupID in an ActivityDetail table using a slightly different form). That field is the control source for the combo box labeled GroupID; the combo box's row source is a query called qrySelectActiveGroups, which limits the values in the combo box to GroupIDs that are flagged active in the inventory table.

When you select a group, it populates six text boxes with information about that group: head, weight, cost, trucking not charged, trucking charged, and trucking expected. The problem we are experiencing is that on the computer that doesn't hold the data, it takes literally 3 minutes just to populate the combo box with the values from the query, and then another 2 minutes to fill in the text boxes. Here is the code that does each of these things:

Private Sub GROUPID_AfterUpdate()
'Author: Scott Davis
'Created: 04/16/2003        Changed: 04/16/2003 - 08/06/2003, AI-10/06/2003
'
'Populates the totals text boxes from qryGetGroupSum when a GroupID
'is selected on a resort (Movement = 5).

Dim db As DAO.Database              'qualify as DAO to avoid clashing with ADO's Recordset
Dim qdDetailSum As DAO.QueryDef
Dim rsDetailSum As DAO.Recordset

If Not IsNull(Me.GroupID) And Me.Movement = 5 Then
  DoCmd.Hourglass True
    Set db = CurrentDb
    Set qdDetailSum = db.QueryDefs("qryGetGroupSum")
    qdDetailSum.Parameters("[GetGroup]") = Me.GroupID
    Set rsDetailSum = qdDetailSum.OpenRecordset
    If Not rsDetailSum.EOF Then     'EOF is a reliable "no records" test right after opening
        Me.TOTHEAD = rsDetailSum![SumOfHEAD]
        Me.TOTWEIGHT = rsDetailSum![SumOfSHRINKWT]
        Me.TOTCOST = rsDetailSum![SumOfVALUE] + rsDetailSum![SumOfADJVALUE]
        Me.txtTruckingCharged = rsDetailSum![SumOfTruckingCharged]
        Me.txtTruckingNotCharged = rsDetailSum![SumOfTruckingNotCharged]
        Me.txtTruckingExp = rsDetailSum![SumOfTruckingExp]
    End If
    rsDetailSum.Close               'release the recordset and any locks it holds
    Set rsDetailSum = Nothing
    Set qdDetailSum = Nothing
  DoCmd.Hourglass False
End If
Call testSave
End Sub

Is there anything that can be done to speed this up? I am going to migrate this database to a SQL Server back end in the future, but that might be another 6 months to a year down the road.

Thanks for any help that you can give

Mike

Answer : Optimizing code for use across a network.

Mike,

About all those Choose() statements in the second query: it looks like you have a series of movement types, each with a direction either into or out of your company (giving cost, weight, etc. multipliers of 1 and -1). How feasible would it be for you to add a new field (e.g. [Multiplier]) to the ACTIVITY table and populate it with either 1 or -1 when the ACTIVITY line is written? If that is possible, use this field as the multiplier instead of the Choose() statements; then you'll work out the multiplier once when each activity line is written rather than many times every time it is read.
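If you do go that way, a one-off update query can back-fill the new field for existing rows (the field name [Multiplier] is just my suggestion):

qryBackfillMultiplier
UPDATE ACTIVITY
SET ACTIVITY.Multiplier = Choose(ACTIVITY.MOVEMENT,1,-1,-1,1,1,1,-1);

Each Choose(activity.movement,...)*activdtl.HEAD expression in the second query then collapses to simply ACTIVITY.Multiplier*ACTIVDTL.HEAD, and likewise for weight, cost, shrink weight, value, adjusted value and the three trucking fields.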

Also, a golden rule with nested queries like this: where possible, always do the query that gives the smaller resultset first. I don't know how big your INVENTRY table is (if it is very big, can you follow LSMConculting's advice above and archive old inventory?), but it may be worth turning the thing on its head, like this...

qryInventryActive
SELECT DISTINCT INVENTRY.GROUPID, INVENTRY.TYPECODE, INVENTRY.LOCATION
FROM INVENTRY
WHERE INVENTRY.Active = TRUE;

qryDetailValuesActive
SELECT ACTIVITY.MOVEMENT, ACTIVITY.ACTIVID, ACTIVDTL.ACTDTLID, ACTIVITY.IN_INV,
       Choose(ACTIVITY.MOVEMENT,1,-1,-1,1,1,1,-1)*ACTIVDTL.HEAD AS HEAD,
       Choose(ACTIVITY.MOVEMENT,1,-1,-1,1,1,1,-1)*ACTIVDTL.WEIGHT AS WEIGHT,
       ACTIVDTL.PRICE,
       Choose(ACTIVITY.MOVEMENT,1,-1,-1,1,1,1,-1)*ACTIVDTL.COST AS COST,
       Choose(ACTIVITY.MOVEMENT,1,-1,-1,1,1,1,-1)*ACTIVDTL.SHRINKWT AS SHRINKWT,
       Choose(ACTIVITY.MOVEMENT,1,-1,-1,1,1,1,-1)*ACTIVDTL.VALUE AS [VALUE],
       Choose(ACTIVITY.MOVEMENT,1,-1,-1,1,1,1,-1)*ACTIVDTL.ADJVALUE AS ADJVALUE,
       ACTIVDTL.PLUG, ACTIVDTL.GROUPID, ACTIVDTL.OLDGROUPID, ACTIVDTL.ADD_DESC,
       ACTIVITY.contactid, ACTIVITY.LOCATIONID, ACTIVDTL.TYPECODE AS DetailType,
       Choose(ACTIVITY.MOVEMENT,1,-1,-1,1,1,1,-1)*ACTIVDTL.TruckingCharged AS TruckingCharged,
       Choose(ACTIVITY.MOVEMENT,1,-1,-1,1,1,1,-1)*ACTIVDTL.TruckingNotCharged AS TruckingNotCharged,
       Choose(ACTIVITY.MOVEMENT,1,-1,-1,1,1,1,-1)*ACTIVDTL.TruckingExp AS TruckingExp
FROM ACTIVITY LEFT JOIN (
    ACTIVDTL INNER JOIN qryInventryActive
        ON ACTIVDTL.GROUPID = qryInventryActive.GROUPID
    )
    ON ACTIVITY.ACTIVID = ACTIVDTL.ACTIVITYID
WHERE (((ACTIVITY.IN_INV)=True))
ORDER BY ACTIVITY.ACTIVID, ACTIVDTL.ACTDTLID;

If the nested joins cause too much trouble, make the first query join ACTIVDTL and INVENTRY, then join this query to ACTIVITY.
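That fallback would look something like this (qryActiveDetail is a name I've made up, and I've elided the field list, which stays the same as above):

qryActiveDetail
SELECT ACTIVDTL.*
FROM ACTIVDTL INNER JOIN INVENTRY
    ON ACTIVDTL.GROUPID = INVENTRY.GROUPID
WHERE INVENTRY.Active = TRUE;

qryDetailValuesActive
SELECT ...same field list as above...
FROM ACTIVITY LEFT JOIN qryActiveDetail
    ON ACTIVITY.ACTIVID = qryActiveDetail.ACTIVITYID
WHERE ACTIVITY.IN_INV = True
ORDER BY ACTIVITY.ACTIVID, qryActiveDetail.ACTDTLID;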

Assuming the resultset from qryInventryActive is quite small (say, tens of rows) and that ACTIVDTL.GROUPID is indexed, the nested join should complete quite quickly. That in turn produces a small resultset, so the top-level join will also complete quickly if ACTIVITY.ACTIVID is indexed.
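If the join fields aren't indexed yet, one-off DDL queries against the back end will add them (the index names are my own invention):

CREATE INDEX idxActivdtlGroupID ON ACTIVDTL (GROUPID);
CREATE INDEX idxActivityActivID ON ACTIVITY (ACTIVID);

(If ACTIVID is already the primary key of ACTIVITY, it is indexed automatically and only the first statement is needed.)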

If my assumptions are correct, you'll still be bringing the same amount of data over the network, but the join will be looking through tens of unindexed rows returned by the subquery rather than nearly a quarter of a million!

As an aside, the spiel above is based on the argument that subqueries are not indexed, so you always want them to return as few records as possible. If anyone knows of a way of indexing subqueries (or whether they keep any existing indexes from their underlying tables), please let me know!


Another option available to you, if you have a bit of time on your hands, is to re-write this section of the front end using ADOX in place of DAO and linked tables. This allows the server-side Access database to act as a genuine server, with server-side saved queries behaving like stored procedures, so you won't have to pull massive amounts of data across the network. Setting the system up like this is too complicated to go into here, but M$'s Knowledge Base (search for Product: Access; Solutions Containing: ADOX) gives useful information on the subject (I set my first ADOX system up using their guidelines).
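For a flavour of it, here is a minimal sketch of calling your saved parameter query through ADO (it needs a reference to the Microsoft ActiveX Data Objects library; the UNC path is a placeholder for wherever your back-end file lives, and the parameter type/size is a guess you should match to the real GroupID field):

Dim cn As ADODB.Connection
Dim cmd As ADODB.Command
Dim rs As ADODB.Recordset

Set cn = New ADODB.Connection
'Connect straight to the back-end file instead of going through linked tables
cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
        "Data Source=\\DataPC\Share\LivestockData.mdb"

Set cmd = New ADODB.Command
With cmd
    .ActiveConnection = cn
    'A saved parameter query behaves like a stored procedure under ADO
    .CommandText = "qryGetGroupSum"
    .CommandType = adCmdStoredProc
    .Parameters.Append .CreateParameter("[GetGroup]", adVarWChar, adParamInput, 50, Me.GroupID)
End With

Set rs = cmd.Execute
If Not rs.EOF Then Me.TOTHEAD = rs![SumOfHEAD]    'and so on for the other text boxes
rs.Close
cn.Close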

Good luck with your hogs!

s46.