I have a C# component attributed with COM+ configuration settings, one of which is ObjectPooling, set to disabled. The object overrides Dispose and the ServicedComponent Activate, Deactivate and CanBePooled methods; CanBePooled always returns true. The component is registered in a COM+ server application using RegSvcs. I run up my unmanaged C++ test app and observe the object activation sequence I expect (all methods have trace messages):
- ctor
- Activate
- Method Call (AutoDeactivate)
- Deactivate
- Dispose
- dtor
So far so good: CanBePooled is not called and the object is not shown as pooled in the Component Services Explorer (CSE).
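For reference, here is a stripped-down sketch of how the component is laid out (class, method and application names are placeholders for my real code, and strong-naming and other registration plumbing is omitted):

using System.Diagnostics;
using System.EnterpriseServices;

// Assembly-level COM+ registration settings (placeholder names).
[assembly: ApplicationName("PoolingTest")]
[assembly: ApplicationActivation(ActivationOption.Server)]

namespace PoolingTest
{
    // Pooling initially disabled; JIT activation is on, so the instance is
    // deactivated when the [AutoComplete] method returns.
    [ObjectPooling(Enabled = false)]
    [JustInTimeActivation(true)]
    public class PooledComponent : ServicedComponent
    {
        public PooledComponent()
        {
            Trace.WriteLine("ctor");
        }

        protected override void Activate()
        {
            Trace.WriteLine("Activate");
        }

        protected override void Deactivate()
        {
            Trace.WriteLine("Deactivate");
        }

        // Always tells COM+ the instance may be returned to the pool.
        protected override bool CanBePooled()
        {
            Trace.WriteLine("CanBePooled");
            return true;
        }

        protected override void Dispose(bool disposing)
        {
            Trace.WriteLine("Dispose");
            base.Dispose(disposing);
        }

        // Finalizer produces the "dtor" trace.
        ~PooledComponent()
        {
            Trace.WriteLine("dtor");
        }

        [AutoComplete]
        public void DoWork()
        {
            Trace.WriteLine("Method Call (AutoDeactivate)");
        }
    }
}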
I shut down the COM+ app and set the component's ObjectPooling to enabled using the CSE. I then re-run the test app and see the same activation sequence: CanBePooled is not called and the object is not added to the pool (as seen in the CSE). This isn't right!
I shut down the COM+ app, modify the component source code to enable the ObjectPooling attribute, rebuild, and re-register with RegSvcs (with the /reconfig switch); the change is shown after the trace below. Running the test app, I now see the activation sequence I expect:
- ctor
- Activate
- Method Call (AutoDeactivate)
- Deactivate
- CanBePooled
CanBePooled is called and the object is shown as pooled in the CSE.
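For completeness, the only source change was the pooling attribute (the pool sizes shown are illustrative, and the assembly name is a placeholder); the rest of the class is unchanged from the sketch above:

// Previously Enabled = false. Rebuild, then re-register so the COM+
// catalog picks up the new attribute settings:
//   regsvcs /reconfig PoolingTest.dll
[ObjectPooling(Enabled = true, MinPoolSize = 1, MaxPoolSize = 10)]
[JustInTimeActivation(true)]
public class PooledComponent : ServicedComponent
{
    // body unchanged from the sketch above
}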
This is the weird bit. Since the object uses JIT activation, the COM+ context, transparent proxy (TP) and ServicedComponentProxy (SCP) remain active independently of the real object. Yet the interception services configured for the COM+ context, which govern the SCP / TP / real object interactions, don't appear to operate independently of the object's metadata; it's as though they ignore the services configuration held in the COM+ catalog for the component.
If anyone can shed any light on this I'd appreciate it very much, as my head is starting to dent the desk.