We are using Deeplearning4j to make predictions with a model that was trained in Keras and imported into Deeplearning4j. We are on the CPU backend; our CPU supports the AVX2 and AVX512 instruction sets, and we have declared the `nd4j-native` avx2 and avx512 dependencies in our `pom.xml`.

The code below is called by 40 threads, each of which has its own `ComputationGraph` (no sharing). As we make predictions, off-heap memory usage keeps increasing, and at some point an `OutOfMemoryError` occurs:

```
Physical memory usage is too high: physicalBytes (341G) > maxPhysicalBytes (340G)
```

GC config:

```java
Nd4j.getMemoryManager().setAutoGcWindow(10000);
```

Workspace config:

```java
private final WorkspaceConfiguration learningConfig = WorkspaceConfiguration.builder()
        .policyLearning(LearningPolicy.FIRST_LOOP)   // <- workspace learns its size after the first loop
        .policyAllocation(AllocationPolicy.STRICT)   // <- disables overallocation behavior
        .build();
```

Running code:

```java
private void predict(ComputationGraph graph, float input1, float input2)
```

Memory usage is logged via `Pointer.physicalBytes()`, `Pointer.formatBytes(Pointer.physicalBytes())`, and `Pointer.availablePhysicalBytes()`.

I added a call to `destroyAllWorkspacesForCurrentThread()` when an 80% memory threshold is reached; however, even though this code is called from each thread, memory stays at around 300G (see the logging below the code).

I was actually planning to update to beta7, but the code that works with beta6 gives an error when running with beta7: 1.0.0-beta7 gave an error during predictions for the imported model, so we delayed the migration.

How can I find which part of the code is leaking memory? Or what should I use to prevent memory leaks?
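One way to keep off-heap growth bounded in a setup like this is to run each prediction inside an explicitly scoped, cyclic workspace and detach only the result, so per-prediction buffers are reused instead of accumulating. Below is a minimal sketch, not the author's actual code: the workspace id `"PREDICT_WS"`, the 1x2 input shape, and the `predict` helper are assumptions for illustration; `getAndActivateWorkspace`, `MemoryWorkspace`, and `INDArray.detach()` are real ND4J APIs as of 1.0.0-beta6.

```java
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.nd4j.linalg.api.memory.MemoryWorkspace;
import org.nd4j.linalg.api.memory.conf.WorkspaceConfiguration;
import org.nd4j.linalg.api.memory.enums.AllocationPolicy;
import org.nd4j.linalg.api.memory.enums.LearningPolicy;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class WorkspacePredictSketch {

    // Same configuration as in the post: sized on the first loop, no overallocation.
    private static final WorkspaceConfiguration WS_CONF = WorkspaceConfiguration.builder()
            .policyLearning(LearningPolicy.FIRST_LOOP)
            .policyAllocation(AllocationPolicy.STRICT)
            .build();

    // Hypothetical helper; "PREDICT_WS" is an arbitrary per-thread workspace id.
    static float[] predict(ComputationGraph graph, float input1, float input2) {
        // Workspaces are thread-local, so each of the 40 threads gets its own
        // cyclic buffer that is reused on every call instead of growing.
        try (MemoryWorkspace ws = Nd4j.getWorkspaceManager()
                .getAndActivateWorkspace(WS_CONF, "PREDICT_WS")) {
            INDArray in = Nd4j.create(new float[]{input1, input2}, new int[]{1, 2});
            // detach() copies the result out of the workspace so it remains
            // valid after the workspace scope closes.
            INDArray out = graph.output(in)[0].detach();
            return out.toFloatVector();
        }
    }
}
```

If memory still climbs, periodically calling `System.gc()` (or lowering the `setAutoGcWindow` interval) helps because JavaCPP deallocates off-heap buffers only when their Java-side pointers are garbage-collected.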