JNI Error Neo4J 3.2.2

Issue #34 resolved
Murray Bryant created an issue

Hi Vince

I thought I should make sure my dev machine was on the latest version.

I upgraded to Neo4j 3.2.2 (javac 1.8.0_111) and did a make.

I now get the following error when I try to run test.

Any ideas?

Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.SecurityException: Invalid signature file digest for Manifest main attributes
    at sun.security.util.SignatureFileVerifier.processImpl(SignatureFileVerifier.java:314)
    at sun.security.util.SignatureFileVerifier.process(SignatureFileVerifier.java:268)
    at java.util.jar.JarVerifier.processEntry(JarVerifier.java:316)
    at java.util.jar.JarVerifier.update(JarVerifier.java:228)
    at java.util.jar.JarFile.initializeVerifier(JarFile.java:383)
    at java.util.jar.JarFile.getInputStream(JarFile.java:450)
    at sun.misc.URLClassPath$JarLoader$2.getInputStream(URLClassPath.java:940)
    at sun.misc.Resource.cachedInputStream(Resource.java:77)
    at sun.misc.Resource.getByteBuffer(Resource.java:160)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:454)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:495)

Comments (13)

  1. Vince Bickers repo owner

    Hi Murray,

    Couple of thoughts. First, if you upgrade Neo4j you'll need to specify the version of Neo4j to use in the make command,

    e.g. make 3.2.2

    I don't know if you did this; if not, please try it and let me know whether this resolves the problem.

    Second thought: are you using test -s by any chance, i.e. the streaming endpoint? The test option isn't intended to connect to Neo4j at all, but I've just checked, and if you specify -s it will try to do so, which is a bug. So you should omit the -s flag for the time being. A fix for this will be along shortly.

  2. Murray Bryant reporter

    Hi Vince

    I did some more testing; here are my steps:

    1. Remove the old directory
    2. git clone a fresh copy of the source
    3. ./make 3.2.2
    4. bin/databridge
    5. init geo2 (worked fine)
    6. test geo2

    I got the same error as above.

    Any ideas what else to check?

  3. Vince Bickers repo owner

    Unfortunately the stack trace doesn't indicate which jar file is causing the problem in the build, but I've pushed a small change to the pom which excludes all signature files - hopefully this will fix this issue.

    Please pull and try again
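
    For reference, here is a minimal sketch of that kind of exclusion, assuming the build shades dependencies with the maven-shade-plugin (an assumption on my part; the actual change pushed to the pom may look different):

    ```xml
    <!-- Illustrative sketch only: strip jar signature files so the
         repackaged uber-jar does not fail signature verification. -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <configuration>
        <filters>
          <filter>
            <artifact>*:*</artifact>
            <excludes>
              <exclude>META-INF/*.SF</exclude>
              <exclude>META-INF/*.DSA</exclude>
              <exclude>META-INF/*.RSA</exclude>
            </excludes>
          </filter>
        </filters>
      </configuration>
    </plugin>
    ```

    Signature files from a signed dependency no longer match the manifest once everything is repackaged into a single jar, which is what triggers the "Invalid signature file digest" SecurityException above.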

  4. Murray Bryant reporter

    Thanks Vince

    That has fixed that problem.

    My issue with importing a small number of nodes is now fixed.

    I am still getting a crash when importing a large number of nodes:

    2017-08-02 21:32:57,480  INFO .databridge.notification.ProgressMonitor:  87 - Total: rows: 460,000, nodes: 1,119,349, edges: 898,158, rate: 152,402 objects/sec, 34,748 rows/sec
    Exception in thread "Thread-5" java.lang.IllegalStateException: Attempt to allocate #65 extra segment tier, 64 is maximum.
    Possible reasons include:
     - you have forgotten to configure (or configured wrong) builder.entries() number
     - same regarding other sizing Chronicle Hash configurations, most likely maxBloatFactor(), averageKeySize(), or averageValueSize()
     - keys, inserted into the ChronicleHash, are distributed suspiciously bad. This might be a DOS attack
        at net.openhft.chronicle.hash.impl.VanillaChronicleHash.allocateTier(VanillaChronicleHash.java:767)
        at net.openhft.chronicle.map.impl.CompiledMapQueryContext.nextTier(CompiledMapQueryContext.java:2858)
        at net.openhft.chronicle.map.impl.CompiledMapQueryContext.alloc(CompiledMapQueryContext.java:3022)
        at net.openhft.chronicle.map.impl.CompiledMapQueryContext.initEntryAndKey(CompiledMapQueryContext.java:3436)
        at net.openhft.chronicle.map.impl.CompiledMapQueryContext.putEntry(CompiledMapQueryContext.java:3891)
        at net.openhft.chronicle.map.impl.CompiledMapQueryContext.doInsert(CompiledMapQueryContext.java:4080)
        at net.openhft.chronicle.map.MapEntryOperations.insert(MapEntryOperations.java:157)
        at net.openhft.chronicle.map.impl.CompiledMapQueryContext.insert(CompiledMapQueryContext.java:4051)
        at net.openhft.chronicle.map.MapMethods.putIfAbsent(MapMethods.java:121)
        at net.openhft.chronicle.map.VanillaChronicleMap.putIfAbsent(VanillaChronicleMap.java:517)
        at net.openhft.chronicle.set.SetFromMap.add(SetFromMap.java:74)
        at com.graphaware.neo4j.databridge.catalogs.edge.EdgeCatalog.add(EdgeCatalog.java:83)
        at com.graphaware.neo4j.databridge.DatabridgeWorker.uniqueEdge(DatabridgeWorker.java:204)
        at com.graphaware.neo4j.databridge.DatabridgeWorker.createEdge(DatabridgeWorker.java:194)
        at com.graphaware.neo4j.databridge.DatabridgeWorker.createEdges(DatabridgeWorker.java:182)
        at com.graphaware.neo4j.databridge.DatabridgeWorker.createGraphObjects(DatabridgeWorker.java:76)
        at com.graphaware.neo4j.databridge.DatabridgeWorker.run(DatabridgeWorker.java:88)
        at java.lang.Thread.run(Thread.java:745)
    
  5. Vince Bickers repo owner

    Does test work or is it just import that is failing?

    If it's just import failing, please try with import -d to clear any previous mappings.

    If that still doesn't fix the issue, please run profile on the dataset in question and post the results here.

    Thanks!

    Vince

  6. Murray Bryant reporter

    Hi Vince

    It is running test on an import I made.

    Here is the profile output

    neo4j-databridge-shell$ profile geo1
    2017-08-03 09:28:50,967  INFO .databridge.notification.ProgressMonitor:  87 - Profiling tbldhcoll.json
    2017-08-03 09:28:52,187  INFO .databridge.notification.ProgressMonitor:  87 - Rows profiled: 100,000, rate: 81,900 rows/sec
    2017-08-03 09:28:52,884  INFO .databridge.notification.ProgressMonitor:  87 - Rows profiled: 200,000, rate: 104,329 rows/sec
    2017-08-03 09:28:53,598  INFO .databridge.notification.ProgressMonitor:  87 - Rows profiled: 300,000, rate: 113,981 rows/sec
    2017-08-03 09:28:54,410  INFO .databridge.notification.ProgressMonitor:  87 - Rows profiled: 400,000, rate: 116,144 rows/sec
    2017-08-03 09:28:54,639  INFO .databridge.notification.ProgressMonitor:  87 - Profiling tbldhsurv.json
    2017-08-03 09:28:54,935  INFO .databridge.notification.ProgressMonitor:  87 - Rows profiled: 500,000, rate: 125,976 rows/sec
    2017-08-03 09:28:55,333  INFO .databridge.notification.ProgressMonitor:  87 - Rows profiled: 600,000, rate: 137,394 rows/sec
    2017-08-03 09:28:55,711  INFO .databridge.notification.ProgressMonitor:  87 - Rows profiled: 700,000, rate: 147,523 rows/sec
    2017-08-03 09:28:56,074  INFO .databridge.notification.ProgressMonitor:  87 - Rows profiled: 800,000, rate: 156,617 rows/sec
    2017-08-03 09:28:56,435  INFO .databridge.notification.ProgressMonitor:  87 - Rows profiled: 900,000, rate: 164,563 rows/sec
    2017-08-03 09:28:56,775  INFO .databridge.notification.ProgressMonitor:  87 - Rows profiled: 1,000,000, rate: 172,146 rows/sec
    2017-08-03 09:28:57,144  INFO .databridge.notification.ProgressMonitor:  87 - Rows profiled: 1,100,000, rate: 178,051 rows/sec
    2017-08-03 09:28:57,489  INFO .databridge.notification.ProgressMonitor:  87 - Rows profiled: 1,200,000, rate: 183,964 rows/sec
    2017-08-03 09:28:57,845  INFO .databridge.notification.ProgressMonitor:  87 - Rows profiled: 1,300,000, rate: 188,980 rows/sec
    2017-08-03 09:28:58,182  INFO .databridge.notification.ProgressMonitor:  87 - Rows profiled: 1,400,000, rate: 194,013 rows/sec
    2017-08-03 09:28:58,328  INFO .databridge.notification.ProgressMonitor:  87 - Profiling finished. Rows processed: 1441558, time: 0 hrs, 0 mins, 7 secs, rate: 194332 rows/sec
    {
            nodes: [
                    { 'Coordinate': {'max':877096, 'akl':22.94} }
                    { 'Dataset': {'max':438548, 'akl':8.91} }
                    { 'DownholePoint': {'max':1003010, 'akl':20.59} }
                    { 'DrillHole': {'max':1441558, 'akl':16.95} }
                    { 'DownholeSurvey': {'max':1003010, 'akl':20.59} }
            ],
            edges: [
                    { '{LOCATED_AT}': {'max':438548} }
                    { '{CONTAINS}': {'max':1003010} }
                    { '{WITHIN}': {'max':438548} }
                    { '{POSITION_OF}': {'max':1003010} }
            ]
    }neo4j-databridge-shell$
    
  7. Murray Bryant reporter

    I can replicate this error on a small data import

    Profile:

    profile hallscreek
    2017-08-03 10:02:04,164  INFO .databridge.notification.ProgressMonitor:  87 - Profiling company.json
    2017-08-03 10:02:05,154  INFO .databridge.notification.ProgressMonitor:  87 - Profiling person.json
    2017-08-03 10:02:05,334  INFO .databridge.notification.ProgressMonitor:  87 - Profiling location.json
    2017-08-03 10:02:05,396  INFO .databridge.notification.ProgressMonitor:  87 - Profiling lithologycode.json
    2017-08-03 10:02:05,490  INFO .databridge.notification.ProgressMonitor:  87 - Profiling finished. Rows processed: 531, time: 0 hrs, 0 mins, 1 secs, rate: 382 rows/sec
    {
            nodes: [
                    { 'LoggingCodeSchema': {'max':429, 'akl':36.0} }
                    { 'Project': {'max':8, 'akl':36.0} }
                    { 'Company': {'max':47, 'akl':36.0} }
                    { 'Lease': {'max':8, 'akl':36.0} }
                    { 'Lithology': {'max':429, 'akl':36.0} }
                    { 'CompanyType': {'max':39, 'akl':8.59} }
                    { 'Person': {'max':55, 'akl':36.0} }
                    { 'LocationType': {'max':8, 'akl':36.0} }
                    { 'Location': {'max':8, 'akl':36.0} }
            ],
            edges: [
                    { '{GROUPED_BY}': {'max':39} }
                    { '{SUPERVISED_BY}': {'max':8} }
                    { '{WITHIN}': {'max':16} }
                    { '{DESCRIBED_BY}': {'max':437} }
                    { '{DRILLED_BY}': {'max':8} }
            ]
    }
    

    Error

    neo4j-databridge-shell$ import -d hallscreek
    2017-08-03 10:02:39,288  INFO .databridge.notification.ProgressMonitor:  87 - Starting profiling phase.
    2017-08-03 10:02:39,351  INFO .databridge.notification.ProgressMonitor:  87 - Profiling company.json
    2017-08-03 10:02:39,739  INFO .databridge.notification.ProgressMonitor:  87 - Profiling person.json
    2017-08-03 10:02:39,795  INFO .databridge.notification.ProgressMonitor:  87 - Profiling location.json
    2017-08-03 10:02:39,850  INFO .databridge.notification.ProgressMonitor:  87 - Profiling lithologycode.json
    2017-08-03 10:02:39,934  INFO .databridge.notification.ProgressMonitor:  87 - Profiling finished. Rows processed: 531, time: 0 hrs, 0 mins, 0 secs, rate: 828 rows/sec
    2017-08-03 10:02:39,934  INFO .databridge.notification.ProgressMonitor:  87 - Profiling phase complete. Time: 0 hrs, 0 mins, 0 secs
    2017-08-03 10:02:39,934  INFO .databridge.notification.ProgressMonitor:  87 - Starting loading phase
    2017-08-03 10:02:39,935  INFO .databridge.notification.ProgressMonitor:  87 - Initialising Offline Endpoint
    2017-08-03 10:02:40,296  INFO .databridge.notification.ProgressMonitor:  87 - Importing company.json
    2017-08-03 10:02:42,101  INFO .databridge.notification.ProgressMonitor:  87 - Total: rows: 39, nodes: 44, edges: 39, rate: 46 objects/sec, 21 rows/sec
    2017-08-03 10:02:42,101  INFO .databridge.notification.ProgressMonitor:  87 - Importing person.json
    2017-08-03 10:02:42,170  INFO .databridge.notification.ProgressMonitor:  87 - Total: rows: 94, nodes: 99, edges: 39, rate: 73 objects/sec, 50 rows/sec
    2017-08-03 10:02:42,170  INFO .databridge.notification.ProgressMonitor:  87 - Importing location.json
    2017-08-03 10:02:42,258  INFO .databridge.notification.ProgressMonitor:  87 - Total: rows: 102, nodes: 110, edges: 71, rate: 92 objects/sec, 51 rows/sec
    2017-08-03 10:02:42,259  INFO .databridge.notification.ProgressMonitor:  87 - Importing lithologycode.json
    Exception in thread "Thread-9" java.lang.IllegalStateException: Attempt to allocate #2 extra segment tier, 1 is maximum.
    Possible reasons include:
     - you have forgotten to configure (or configured wrong) builder.entries() number
     - same regarding other sizing Chronicle Hash configurations, most likely maxBloatFactor(), averageKeySize(), or averageValueSize()
     - keys, inserted into the ChronicleHash, are distributed suspiciously bad. This might be a DOS attack
            at net.openhft.chronicle.hash.impl.VanillaChronicleHash.allocateTier(VanillaChronicleHash.java:767)
            at net.openhft.chronicle.map.impl.CompiledMapQueryContext.nextTier(CompiledMapQueryContext.java:2858)
            at net.openhft.chronicle.map.impl.CompiledMapQueryContext.alloc(CompiledMapQueryContext.java:3022)
            at net.openhft.chronicle.map.impl.CompiledMapQueryContext.initEntryAndKey(CompiledMapQueryContext.java:3436)
            at net.openhft.chronicle.map.impl.CompiledMapQueryContext.putEntry(CompiledMapQueryContext.java:3891)
            at net.openhft.chronicle.map.impl.CompiledMapQueryContext.doInsert(CompiledMapQueryContext.java:4080)
            at net.openhft.chronicle.map.MapEntryOperations.insert(MapEntryOperations.java:157)
            at net.openhft.chronicle.map.impl.CompiledMapQueryContext.insert(CompiledMapQueryContext.java:4051)
            at net.openhft.chronicle.map.MapMethods.putIfAbsent(MapMethods.java:121)
            at net.openhft.chronicle.map.VanillaChronicleMap.putIfAbsent(VanillaChronicleMap.java:517)
            at net.openhft.chronicle.set.SetFromMap.add(SetFromMap.java:74)
            at com.graphaware.neo4j.databridge.catalogs.edge.EdgeCatalog.add(EdgeCatalog.java:83)
            at com.graphaware.neo4j.databridge.DatabridgeWorker.uniqueEdge(DatabridgeWorker.java:204)
            at com.graphaware.neo4j.databridge.DatabridgeWorker.createEdge(DatabridgeWorker.java:194)
            at com.graphaware.neo4j.databridge.DatabridgeWorker.createEdges(DatabridgeWorker.java:182)
            at com.graphaware.neo4j.databridge.DatabridgeWorker.createGraphObjects(DatabridgeWorker.java:76)
            at com.graphaware.neo4j.databridge.DatabridgeWorker.run(DatabridgeWorker.java:88)
            at java.lang.Thread.run(Thread.java:745)
    2017-08-03 10:02:42,324  INFO .databridge.notification.ProgressMonitor:  87 - Total: rows: 531, nodes: 119, edges: 78, rate: 97 objects/sec, 261 rows/sec
    2017-08-03 10:02:42,325  INFO .databridge.notification.ProgressMonitor:  87 - Loading phase complete. Time: 0 hrs, 0 mins, 2 secs
    2017-08-03 10:02:42,325  INFO .databridge.notification.ProgressMonitor:  87 - Stopping importer
    2017-08-03 10:02:42,331  INFO .databridge.notification.ProgressMonitor:  87 - Nodes created: 119
    2017-08-03 10:02:42,331  INFO .databridge.notification.ProgressMonitor:  87 - Nodes updated: 0
    2017-08-03 10:02:42,331  INFO .databridge.notification.ProgressMonitor:  87 - Edges created: 78
    2017-08-03 10:02:42,331  INFO .databridge.notification.ProgressMonitor:  87 - Starting index creation phase
    2017-08-03 10:02:42,331  INFO .databridge.notification.ProgressMonitor:  87 - Requested: create index :Lease(identity)
    2017-08-03 10:02:42,346  INFO .databridge.notification.ProgressMonitor:  87 - Requested: create index :Lithology(identity)
    2017-08-03 10:02:42,346  INFO .databridge.notification.ProgressMonitor:  87 - Requested: create index :CompanyType(identity)
    2017-08-03 10:02:42,347  INFO .databridge.notification.ProgressMonitor:  87 - Requested: create index :Person(identity)
    2017-08-03 10:02:42,347  INFO .databridge.notification.ProgressMonitor:  87 - Requested: create index :LocationType(identity)
    2017-08-03 10:02:42,347  INFO .databridge.notification.ProgressMonitor:  87 - Requested: create index :Project(identity)
    2017-08-03 10:02:42,347  INFO .databridge.notification.ProgressMonitor:  87 - Requested: create index :Company(identity)
    2017-08-03 10:02:42,347  INFO .databridge.notification.ProgressMonitor:  87 - Requested: create index :LoggingCodeSchema(identity)
    2017-08-03 10:02:42,348  INFO .databridge.notification.ProgressMonitor:  87 - Requested: create index :Location(identity)
    2017-08-03 10:02:42,939  INFO .databridge.notification.ProgressMonitor:  87 - Index creation phase finished. Time: 0 hrs, 0 mins, 0 secs
    2017-08-03 10:02:42,939  INFO .databridge.notification.ProgressMonitor:  87 - Import finished. Time: 0 hrs, 0 mins, 3 secs
    neo4j-databridge-shell$
    
  8. Murray Bryant reporter

    Here is a screenshot of the data that is causing the error. It imported 8 rows before stopping.

    Screen Shot 2017-08-03 at 10.05.08 am.png

    and here is the configuration

    {
        "resource": "lithologycode.json",
        "nodes": [
            {
                "type": "LoggingCodeSchema",
                "identity": ["l_id"],
                "update_strategy": "unique",
                "properties": [
                    { "name": "name", "column": "loggingcode" },
                    { "name": "description", "column": "ldesc" },
                    { "name": "import_time", "value": "now()" },
                    { "name": "guid", "value": "l_id" }
                ]
            },
            {
                "type": "Lithology",
                "identity": ["id"],
                "update_strategy": "unique",
                "properties": [
                    { "name": "name", "column": "code" },
                    { "name": "description", "column": "description" },
                    { "name": "guid", "value": "id" },
                    { "name": "import_time", "value": "now()" }
                ]
            }
        ],
        "edges": [
            { "name": "{DESCRIBED_BY}", "source": "Lithology", "target": "LoggingCodeSchema" }
        ],
        "variables": []
    }
    
  9. Murray Bryant reporter

    Hi Vince

    When I changed the configuration from a dynamic edge to an edge the error went away.

    i.e. from "name":"{DESCRIBED_BY}" to "name":"DESCRIBED_BY"

    I don't understand what is happening here. Are you able to explain it for me?

    Thanks

  10. Vince Bickers repo owner

    Thanks for this, Murray, I have a pretty good idea what the problem is now.

    A dynamic edge "{DESCRIBED_BY}" should use the value of the field "DESCRIBED_BY" on each row as the type of the edge created in the database.

    (I couldn't see such a field name in the sample data, but I assume it is there)
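
    To illustrate with a hypothetical sketch (the column name rel_type and its values are invented, not taken from your sample data): a dynamic edge takes its type from a column on each row, so a mapping like

    ```json
    {
        "edges": [
            { "name": "{rel_type}", "source": "Lithology", "target": "LoggingCodeSchema" }
        ]
    }
    ```

    would create a DESCRIBED_BY edge for a row whose rel_type column contains "DESCRIBED_BY" and a CODED_BY edge for a row containing "CODED_BY", whereas the static form "name":"DESCRIBED_BY" creates the same edge type for every row.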

    In any case, the profiler attempts to determine the maximum number of edges of each type and allocates space accordingly in a memory-mapped file. If this estimate comes out too low, the allocated space will be insufficient and the error above will occur. I'm pretty sure that's what is happening here, so at first glance this looks like a bug. I'll look into it today and get back to you ASAP.
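
    As a rough sketch of the failure mode (a plain-Java analogy only; Chronicle's actual allocation uses memory-mapped segment tiers, and BoundedEdgeSet here is invented for illustration):

    ```java
    import java.util.HashSet;
    import java.util.Set;

    // Analogy: a set whose capacity is fixed up front from a profiler
    // estimate. When the real data exceeds the estimate, inserts fail,
    // much like Chronicle's "extra segment tier ... is maximum" error.
    final class BoundedEdgeSet {
        private final Set<String> edges = new HashSet<>();
        private final long maxEntries;

        BoundedEdgeSet(long maxEntries) { // maxEntries comes from profiling
            this.maxEntries = maxEntries;
        }

        boolean add(String edgeKey) {
            if (edges.size() >= maxEntries && !edges.contains(edgeKey)) {
                throw new IllegalStateException(
                    "Capacity " + maxEntries + " exceeded; estimate was too low");
            }
            return edges.add(edgeKey);
        }
    }

    public class CapacityDemo {
        public static void main(String[] args) {
            BoundedEdgeSet set = new BoundedEdgeSet(2); // estimate: 2 edges
            set.add("A->B");
            set.add("A->C");
            try {
                set.add("A->D"); // actual data has more edges than estimated
            } catch (IllegalStateException e) {
                System.out.println("Overflow: " + e.getMessage());
            }
        }
    }
    ```
    
    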

    I'll close this JNI issue, and let's continue the discussion on this problem in the original ticket #32

  11. Murray Bryant reporter

    Thanks, I now understand what the dynamic edge does.

    I was completely misusing it, as I did not have a field of that name to supply an edge type.

    Maybe a missing field name could be another config validation check.
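
    Such a check might look something like this rough sketch (hypothetical code, not the project's actual API; EdgeConfigValidator and its method names are invented):

    ```java
    import java.util.List;
    import java.util.Set;

    // Hypothetical pre-import check: every dynamic edge name "{column}"
    // must refer to a column that actually exists in the resource.
    public class EdgeConfigValidator {

        /** Returns the column name if edgeName is dynamic ("{col}"), else null. */
        static String dynamicColumn(String edgeName) {
            if (edgeName.startsWith("{") && edgeName.endsWith("}")) {
                return edgeName.substring(1, edgeName.length() - 1);
            }
            return null;
        }

        static void validate(List<String> edgeNames, Set<String> resourceColumns) {
            for (String name : edgeNames) {
                String col = dynamicColumn(name);
                if (col != null && !resourceColumns.contains(col)) {
                    throw new IllegalArgumentException(
                        "Dynamic edge " + name + " refers to missing column: " + col);
                }
            }
        }

        public static void main(String[] args) {
            Set<String> columns = Set.of("id", "code", "description");
            validate(List.of("DESCRIBED_BY"), columns);       // static edge: fine
            try {
                validate(List.of("{DESCRIBED_BY}"), columns); // dynamic, no such column
            } catch (IllegalArgumentException e) {
                System.out.println("Validation failed: " + e.getMessage());
            }
        }
    }
    ```
    
    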
