>Well, at least you're here to give correct technical descriptions of the problem in a printable format that we can give to said moron and watch his demeanor change rapidly.
>
Left out a couple of things:
>>>
>>>"the problems experienced by the product when run from a networked machine appear to be caused by the products' utilisation of the NBT protocol. NBT (NetBios over TCP/IP), is a fairly simple protocol, that propagates network broadcast packets when invoked. When a user connects to network share offered by the server, this opens
>>>a NBT connection to the server. This NBT connection appears to be the limiting factor when a process is initiated in that server via the workstation. When monitoring the NBT connection to the server, the effect of initiating a Fueltrac process is an immediate increase of NBT traffic between the two machines to 100%. This suggests that the neither the server, or the workstation (client), are the limiting factor responsible for the massive increases in response time
>>>from the server, rather that the NBT traffic is. To test this, the server and local workstation were monitored for processor activity and memory activity during the process of report generation, and neither machine showed signs of stress, however NBT traffic rose to 100% immediately. As a test NBT was disabled on the server (and workstation), and an attempt to generate a report was made. This attempt was unsuccessful. This suggests that the software itself dictates that the NBT protocol should be utilised. The server and workstation were capable of using alternative protocols, but none
>>>were used.
He seems to be very confused here; he describes the behavior of NetBEUI but attributes it to NBT. NetBEUI is not routable and operates as a broadcast protocol on the local subnet; NBT, which runs on top of TCP/IP, is routable and relies on WINS/DNS services or explicit host entries for name resolution rather than broadcasts. Check for a lower limb hanging from the guru's snout - his foot is well down his throat and may actually have reappeared from a posterior position.
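To make the distinction concrete: NBT is just NetBIOS layered on top of TCP/IP per RFC 1001/1002 (name service on UDP 137, sessions on TCP 139), which is exactly why it routes where NetBEUI can't. A minimal sketch of the RFC 1001 "first-level" name encoding - the server name "FUELSRV" is my own placeholder, not from the post:

```python
# NBT well-known ports (RFC 1001/1002) - it rides on TCP/IP, hence routable.
NBT_NAME_SERVICE_PORT = 137   # UDP
NBT_SESSION_PORT = 139        # TCP

def encode_netbios_name(name: str, suffix: int = 0x00) -> str:
    """RFC 1001 first-level encoding: pad the name to 15 bytes, append the
    suffix byte, then split each byte into two nibbles mapped to 'A'..'P'."""
    raw = name.upper().ljust(15)[:15].encode("ascii") + bytes([suffix])
    return "".join(
        chr((b >> 4) + ord("A")) + chr((b & 0x0F) + ord("A"))
        for b in raw
    )

# "FUELSRV" here is a hypothetical server name for illustration.
print(encode_netbios_name("FUELSRV"))
```

The encoded 32-character string is what you'd see inside NBT name-service packets on the wire - a far cry from NetBEUI's flat broadcast naming.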
The one thing he's generally correct about is that the wire (network bandwidth) is probably the limiting factor in this situation. If this is running on 10Mbps Ethernet, it's almost certain that the network bandwidth limits throughput - 10Mbps is only 1.25MB/sec raw, and once framing and SMB/NBT protocol overhead take their cut, even on a good day 10Mbps Ethernet isn't likely to sustain a data rate of .5MB/sec - a far cry from the 4-10MB/sec sustainable DTR of a good high-performance drive. With even light contention, that drops radically; on a bad day with a large subnet, I've seen throughput drop to next to nothing - 10Kbps during heavy contention from lots of stations on a shared subnet. Reviewing the cable plant design here could be in order, especially in a multi-server environment - switches rather than hubs, or breaking up the LAN segments to reduce the traffic on any single segment, could make a real difference.
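The back-of-envelope arithmetic behind that claim (the 40% efficiency factor is my own generous assumption for a shared segment, not a measurement from the post):

```python
# Raw 10Mbps Ethernet vs. the sustained transfer rate of a local disk.
ETHERNET_BPS = 10_000_000                        # 10Mbps raw signalling rate
raw_mb_per_sec = ETHERNET_BPS / 8 / 1_000_000    # 1.25 MB/sec absolute ceiling

# Framing, collisions and SMB/NBT overhead eat a large slice; 40%
# efficiency is a generous assumption for a shared (hubbed) segment.
effective_mb_per_sec = raw_mb_per_sec * 0.4      # ~0.5 MB/sec

disk_mb_per_sec = 4.0    # low end of the quoted 4-10MB/sec drive DTR
print(f"wire: ~{effective_mb_per_sec:.2f} MB/sec, disk: {disk_mb_per_sec:.1f} MB/sec")
print(f"the wire is ~{disk_mb_per_sec / effective_mb_per_sec:.0f}x slower than the disk")
```

Even at the low end of the drive's range, the shared wire is nearly an order of magnitude slower - so the server and workstation idling while NBT traffic pegs at 100% is exactly what you'd expect.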
I'd suggest checking for little pieces of blotter paper or funny cigarettes, because he's definitely out there with Alice, the Caterpillar and the rest of the characters from "Through the Looking Glass".