Error/Failure in Test
Issue #28 (resolved)

Executing make test resulted in 1 failure and 1 error. Below is the output of running make test immediately after running make clean, make install, and make init. Running docker pull atlassianlabs/localstack:latest reports that the image is up to date.
~/aws_projects/localstack$ make test
make lint && \
. .venv/bin/activate; DEBUG= PYTHONPATH=`pwd` nosetests --with-coverage --logging-level=WARNING --nocapture --no-skip --exe --cover-erase --cover-tests --cover-inclusive --cover-package=localstack --with-xunit --exclude='.venv' . ##--exclude='.venv.*' .
make[1]: Entering directory '/home/msunardi/aws_projects/localstack'
(. .venv/bin/activate; pep8 --max-line-length=120 --ignore=E128 --exclude=node_modules,legacy,.venv,dist .)
make[1]: Leaving directory '/home/msunardi/aws_projects/localstack'
cmd: ES_JAVA_OPTS="$ES_JAVA_OPTS -Xms200m -Xmx500m" /home/msunardi/aws_projects/localstack/localstack/infra/elasticsearch/bin/elasticsearch -E http.port=4587 -E http.publish_port=4587 -E path.data=/home/msunardi/aws_projects/localstack/localstack/infra/elasticsearch/data
Starting local Elasticsearch (port 4571)...
Starting mock ES service (port 4578)...
Starting mock S3 (port 4572)...
Starting mock SNS (port 4575)...
Starting mock SQS (port 4576)...
Starting mock API Gateway (port 4567)...
Starting mock DynamoDB (port 4569)...
Starting mock DynamoDB Streams service (port 4570)...
Starting mock Firehose service (port 4573)...
Starting mock Lambda service (port 4574)...
Starting mock Kinesis (port 4568)...
Starting mock Redshift (port 4577)...
Starting mock Route53 (port 4580)...
Starting mock SES (port 4579)...
Starting mock CloudFormation (port 4581)...
[2017-05-22T10:59:58,572][INFO ][o.e.n.Node ] [] initializing ...
[2017-05-22T10:59:58,762][INFO ][o.e.e.NodeEnvironment ] [svCF05b] using [1] data paths, mounts [[/ (/dev/mapper/ubuntu--vg-root)]], net usable_space [2.2gb], net total_space [70.9gb], spins? [possibly], types [ext4]
[2017-05-22T10:59:58,767][INFO ][o.e.e.NodeEnvironment ] [svCF05b] heap size [483.3mb], compressed ordinary object pointers [true]
[2017-05-22T10:59:58,772][INFO ][o.e.n.Node ] node name [svCF05b] derived from node ID [svCF05bGSzajRVdZBm_k5Q]; set [node.name] to override
[2017-05-22T10:59:58,772][INFO ][o.e.n.Node ] version[5.3.0], pid[8366], build[3adb13b/2017-03-23T03:31:50.652Z], OS[Linux/4.4.0-78-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_131/25.131-b11]
[2017-05-22T10:59:59,948][INFO ][o.e.p.PluginsService ] [svCF05b] loaded module [aggs-matrix-stats]
[2017-05-22T10:59:59,951][INFO ][o.e.p.PluginsService ] [svCF05b] loaded module [ingest-common]
[2017-05-22T10:59:59,952][INFO ][o.e.p.PluginsService ] [svCF05b] loaded module [lang-expression]
[2017-05-22T10:59:59,952][INFO ][o.e.p.PluginsService ] [svCF05b] loaded module [lang-groovy]
[2017-05-22T10:59:59,952][INFO ][o.e.p.PluginsService ] [svCF05b] loaded module [lang-mustache]
[2017-05-22T10:59:59,952][INFO ][o.e.p.PluginsService ] [svCF05b] loaded module [lang-painless]
[2017-05-22T10:59:59,952][INFO ][o.e.p.PluginsService ] [svCF05b] loaded module [percolator]
[2017-05-22T10:59:59,952][INFO ][o.e.p.PluginsService ] [svCF05b] loaded module [reindex]
[2017-05-22T10:59:59,952][INFO ][o.e.p.PluginsService ] [svCF05b] loaded module [transport-netty3]
[2017-05-22T10:59:59,953][INFO ][o.e.p.PluginsService ] [svCF05b] loaded module [transport-netty4]
[2017-05-22T10:59:59,953][INFO ][o.e.p.PluginsService ] [svCF05b] no plugins loaded
[2017-05-22T11:00:03,054][INFO ][o.e.n.Node ] initialized
[2017-05-22T11:00:03,062][INFO ][o.e.n.Node ] [svCF05b] starting ...
[2017-05-22T11:00:03,271][INFO ][o.e.t.TransportService ] [svCF05b] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2017-05-22T11:00:03,283][WARN ][o.e.b.BootstrapChecks ] [svCF05b] initial heap size [209715200] not equal to maximum heap size [524288000]; this can cause resize pauses and prevents mlockall from locking the entire heap
[2017-05-22T11:00:03,284][WARN ][o.e.b.BootstrapChecks ] [svCF05b] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2017-05-22T11:00:06,397][INFO ][o.e.c.s.ClusterService ] [svCF05b] new_master {svCF05b}{svCF05bGSzajRVdZBm_k5Q}{_1dUVJJ2RWuEojwjlvuOuw}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-05-22T11:00:06,452][INFO ][o.e.h.n.Netty4HttpServerTransport] [svCF05b] publish_address {127.0.0.1:4587}, bound_addresses {[::1]:4587}, {127.0.0.1:4587}
[2017-05-22T11:00:06,464][INFO ][o.e.n.Node ] [svCF05b] started
[2017-05-22T11:00:06,525][INFO ][o.e.g.GatewayService ] [svCF05b] recovered [0] indices into cluster_state
Ready.
..Thread run method <bound method GenericProxy.run_cmd of <GenericProxy(Thread-69, started daemon 140619200571136)>>({}) failed: Traceback (most recent call last):
File "/home/msunardi/aws_projects/localstack/localstack/utils/common.py", line 49, in run
self.func(self.params)
File "/home/msunardi/aws_projects/localstack/localstack/mock/generic_proxy.py", line 139, in run_cmd
self.httpd.serve_forever()
File "/usr/lib/python2.7/SocketServer.py", line 231, in serve_forever
poll_interval)
File "/usr/lib/python2.7/SocketServer.py", line 150, in _eintr_retry
return func(*args)
File "/usr/lib/python2.7/SocketServer.py", line 456, in fileno
return self.socket.fileno()
File "/usr/lib/python2.7/socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
File "/usr/lib/python2.7/socket.py", line 174, in _dummy
raise error(EBADF, 'Bad file descriptor')
error: [Errno 9] Bad file descriptor
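The EBADF above typically surfaces when one thread closes the proxy's listening socket (e.g. during shutdown) while serve_forever() is still polling it. A hedged sketch of that general failure mode (not localstack's code; the exact exception type varies by Python version, socket.error/EBADF on 2.7 versus OSError or ValueError on 3.x):

```python
# Sketch only: serving on a listener that has already been closed fails
# inside serve_forever(), much like the EBADF traceback above.
import http.server
import socketserver

srv = socketserver.TCPServer(("127.0.0.1", 0), http.server.BaseHTTPRequestHandler)
srv.socket.close()  # simulate a racing shutdown closing the listening socket

try:
    srv.serve_forever(poll_interval=0.01)
    outcome = "no error"
except (OSError, ValueError) as e:
    # Python 3 raises ValueError (invalid fd) or OSError here; 2.7 raised socket.error
    outcome = type(e).__name__

print(outcome)
```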
..[2017-05-22T11:00:36,449][WARN ][o.e.c.r.a.DiskThresholdMonitor] [svCF05b] high disk watermark [90%] exceeded on [svCF05bGSzajRVdZBm_k5Q][svCF05b][/home/msunardi/aws_projects/localstack/localstack/infra/elasticsearch/data/nodes/0] free: 2.2gb[3.1%], shards will be relocated away from this node
[2017-05-22T11:00:36,455][INFO ][o.e.c.r.a.DiskThresholdMonitor] [svCF05b] rerouting shards: [high disk watermark exceeded on one or more nodes]
...Creating test streams...
[2017-05-22T11:01:06,465][WARN ][o.e.c.r.a.DiskThresholdMonitor] [svCF05b] high disk watermark [90%] exceeded on [svCF05bGSzajRVdZBm_k5Q][svCF05b][/home/msunardi/aws_projects/localstack/localstack/infra/elasticsearch/data/nodes/0] free: 2.2gb[3.1%], shards will be relocated away from this node
Kinesis consumer initialized.
Putting 10 items to table...
docker run -e HOSTNAME="172.17.0.1" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -v "/tmp/localstack/zipfile.ebd9e1cf":/var/task lambci/lambda:python2.7 "handler.handler"
docker run -e HOSTNAME="172.17.0.1" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -v "/tmp/localstack/zipfile.ebd9e1cf":/var/task lambci/lambda:python2.7 "handler.handler"
docker run -e HOSTNAME="172.17.0.1" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -v "/tmp/localstack/zipfile.ebd9e1cf":/var/task lambci/lambda:python2.7 "handler.handler"
docker run -e HOSTNAME="172.17.0.1" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -v "/tmp/localstack/zipfile.ebd9e1cf":/var/task lambci/lambda:python2.7 "handler.handler"
docker run -e HOSTNAME="172.17.0.1" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -v "/tmp/localstack/zipfile.ebd9e1cf":/var/task lambci/lambda:python2.7 "handler.handler"
[2017-05-22T11:01:36,467][WARN ][o.e.c.r.a.DiskThresholdMonitor] [svCF05b] high disk watermark [90%] exceeded on [svCF05bGSzajRVdZBm_k5Q][svCF05b][/home/msunardi/aws_projects/localstack/localstack/infra/elasticsearch/data/nodes/0] free: 2.2gb[3.1%], shards will be relocated away from this node
[2017-05-22T11:01:36,467][INFO ][o.e.c.r.a.DiskThresholdMonitor] [svCF05b] rerouting shards: [high disk watermark exceeded on one or more nodes]
docker run -e HOSTNAME="172.17.0.1" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -v "/tmp/localstack/zipfile.ebd9e1cf":/var/task lambci/lambda:python2.7 "handler.handler"
docker run -e HOSTNAME="172.17.0.1" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -v "/tmp/localstack/zipfile.ebd9e1cf":/var/task lambci/lambda:python2.7 "handler.handler"
docker run -e HOSTNAME="172.17.0.1" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -v "/tmp/localstack/zipfile.ebd9e1cf":/var/task lambci/lambda:python2.7 "handler.handler"
docker run -e HOSTNAME="172.17.0.1" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -v "/tmp/localstack/zipfile.ebd9e1cf":/var/task lambci/lambda:python2.7 "handler.handler"
docker run -e HOSTNAME="172.17.0.1" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -v "/tmp/localstack/zipfile.ebd9e1cf":/var/task lambci/lambda:python2.7 "handler.handler"
Putting 10 items to stream...
docker run -e HOSTNAME="172.17.0.1" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -v "/tmp/localstack/zipfile.fc19edc2":/var/task lambci/lambda:python2.7 "handler.handler"
Waiting some time before finishing test.
DynamoDB and Kinesis updates retrieved (actual/expected): 20/20
.docker run -e HOSTNAME="172.17.0.1" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -v "/tmp/localstack/zipfile.8041dfb0":/var/task lambci/lambda:python2.7 "lambda_integration.handler"
ERROR: 'docker run -e HOSTNAME="172.17.0.1" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -v "/tmp/localstack/zipfile.8041dfb0":/var/task lambci/lambda:python2.7 "lambda_integration.handler"': START RequestId: 926da723-7a52-41f5-b19a-397fd5851ba2 Version: $LATEST
Unable to import module 'lambda_integration': No module named localstack.utils.aws
END RequestId: 926da723-7a52-41f5-b19a-397fd5851ba2
REPORT RequestId: 926da723-7a52-41f5-b19a-397fd5851ba2 Duration: 122 ms Billed Duration: 200 ms Memory Size: 1536 MB Max Memory Used: 20 MB
{"errorMessage": "Unable to import module 'lambda_integration'"}
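The "Unable to import module" failure above means the localstack package is not importable inside the lambci/lambda container, which sees only the mounted /var/task directory on its sys.path. A small, hypothetical probe (not part of localstack) showing the same importability check:

```python
# Hypothetical helper: report whether a module is importable in the current
# environment. Inside the Lambda container, "localstack.utils.aws" fails this
# check because only the mounted zip contents are on sys.path.
import importlib

def can_import(name: str) -> bool:
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False

print(can_import("json"))                      # stdlib: importable anywhere
print(can_import("module_that_is_not_there"))  # missing: mirrors the container failure
```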
Edocker run -e HOSTNAME="172.17.0.1" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -v "/tmp/localstack/zipfile.ef8c5618":/var/task lambci/lambda:python2.7 "handler.handler"
F[2017-05-22T11:02:06,471][WARN ][o.e.c.r.a.DiskThresholdMonitor] [svCF05b] high disk watermark [90%] exceeded on [svCF05bGSzajRVdZBm_k5Q][svCF05b][/home/msunardi/aws_projects/localstack/localstack/infra/elasticsearch/data/nodes/0] free: 2.2gb[3.1%], shards will be relocated away from this node
.Shutdown
ERROR: 'ps aux 2>&1 | grep '[^\s]*\s*8364\s' | grep -v grep | grep ''':
...
======================================================================
ERROR: tests.integration.test_lambda.test_upload_lambda_from_s3
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/msunardi/aws_projects/localstack/.venv/local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/msunardi/aws_projects/localstack/tests/integration/test_lambda.py", line 50, in test_upload_lambda_from_s3
result = lambda_client.invoke(FunctionName=lambda_name, Payload=data_before)
File "/home/msunardi/aws_projects/localstack/.venv/local/lib/python2.7/site-packages/botocore/client.py", line 253, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/msunardi/aws_projects/localstack/.venv/local/lib/python2.7/site-packages/botocore/client.py", line 544, in _make_api_call
raise error_class(parsed_response, operation_name)
ClientError: An error occurred (Exception) when calling the Invoke operation: Error executing Lambda function: Command 'docker run -e HOSTNAME="172.17.0.1" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -v "/tmp/localstack/zipfile.8041dfb0":/var/task lambci/lambda:python2.7 "lambda_integration.handler"' returned non-zero exit status 1 Traceback (most recent call last):
File "/home/msunardi/aws_projects/localstack/localstack/mock/apis/lambda_api.py", line 220, in run_lambda
result = run(cmd, env_vars={'AWS_LAMBDA_EVENT_BODY': event_string})
File "/home/msunardi/aws_projects/localstack/localstack/utils/common.py", line 326, in run
if cache_duration_secs <= 0:
File "/home/msunardi/aws_projects/localstack/localstack/utils/common.py", line 323, in do_run
print("ERROR: '%s': %s" % (cmd, e.output))
CalledProcessError: Command 'docker run -e HOSTNAME="172.17.0.1" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -v "/tmp/localstack/zipfile.8041dfb0":/var/task lambci/lambda:python2.7 "lambda_integration.handler"' returned non-zero exit status 1
-------------------- >> begin captured logging << --------------------
localstack.mock.apis.lambda_api: WARNING: Error executing Lambda function: Command 'docker run -e HOSTNAME="172.17.0.1" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -v "/tmp/localstack/zipfile.8041dfb0":/var/task lambci/lambda:python2.7 "lambda_integration.handler"' returned non-zero exit status 1 Traceback (most recent call last):
File "/home/msunardi/aws_projects/localstack/localstack/mock/apis/lambda_api.py", line 220, in run_lambda
result = run(cmd, env_vars={'AWS_LAMBDA_EVENT_BODY': event_string})
File "/home/msunardi/aws_projects/localstack/localstack/utils/common.py", line 326, in run
if cache_duration_secs <= 0:
File "/home/msunardi/aws_projects/localstack/localstack/utils/common.py", line 323, in do_run
print("ERROR: '%s': %s" % (cmd, e.output))
CalledProcessError: Command 'docker run -e HOSTNAME="172.17.0.1" -e AWS_LAMBDA_EVENT_BODY="$AWS_LAMBDA_EVENT_BODY" -v "/tmp/localstack/zipfile.8041dfb0":/var/task lambci/lambda:python2.7 "lambda_integration.handler"' returned non-zero exit status 1
--------------------- >> end captured logging << ---------------------
======================================================================
FAIL: tests.integration.test_lambda.test_lambda_runtimes
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/msunardi/aws_projects/localstack/.venv/local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/msunardi/aws_projects/localstack/tests/integration/test_lambda.py", line 67, in test_lambda_runtimes
assert to_str(result_data).strip() == '{}'
AssertionError
Name Stmts Miss Cover Missing
------------------------------------------------------------------------------------
localstack.py 0 0 100%
localstack/config.py 33 4 88% 30, 40, 42, 46
localstack/constants.py 47 1 98% 77
localstack/dashboard.py 0 0 100%
localstack/dashboard/api.py 44 25 43% 19-22, 34-37, 53-56, 70-73, 78, 83, 87-91, 95-96
localstack/dashboard/infra.py 376 94 75% 86, 93, 107-108, 114-115, 131, 140-141, 147-150, 161-165, 169-173, 178-183, 187-192, 198, 217-218, 224-225, 231, 244, 247, 261-267, 294-302, 304-305, 328-329, 347-355, 361-362, 383-384, 389-393, 397-405, 433-435, 473-474, 477-479
localstack/mock.py 0 0 100%
localstack/mock/apis.py 0 0 100%
localstack/mock/apis/dynamodbstreams_api.py 31 4 87% 39, 50-52
localstack/mock/apis/es_api.py 21 3 86% 32-34
localstack/mock/apis/firehose_api.py 123 25 80% 61-63, 70-71, 82, 109, 118, 138-142, 153-168, 180-182
localstack/mock/apis/lambda_api.py 377 143 62% 49, 125-133, 137-140, 152-153, 159-162, 181-182, 196-200, 203-205, 212, 222-226, 229-230, 234-235, 237-238, 243-266, 290, 299, 309-310, 315, 333-335, 340-341, 347-361, 364-368, 371, 415-416, 440-445, 472-487, 499-501, 528-535, 550, 553-554, 557, 560, 574, 608-615, 624-628, 639-641
localstack/mock/generic_proxy.py 115 13 89% 50-51, 58-60, 75, 94, 97-100, 103, 120
localstack/mock/infra.py 284 45 84% 47-49, 61-63, 81, 97, 172, 176-177, 191, 198, 212, 227, 233, 249-250, 266-268, 270, 280-282, 284, 294-296, 298, 311, 313, 334, 337-338, 412-413, 415-416, 419, 424-430
localstack/mock/install.py 51 19 63% 25-33, 39-40, 46-47, 64, 69-73
localstack/mock/proxy.py 0 0 100%
localstack/mock/proxy/apigateway_listener.py 50 15 70% 16-19, 32-35, 48-50, 60-65
localstack/mock/proxy/cloudformation_listener.py 80 46 43% 23-35, 39-48, 52-57, 61-67, 71-81, 85-107, 119, 121, 123
localstack/mock/proxy/dynamodb_listener.py 73 8 89% 26, 49-51, 61-62, 88-89
localstack/mock/proxy/kinesis_listener.py 38 5 87% 18-26
localstack/mock/proxy/s3_listener.py 139 57 59% 85-86, 89-93, 96-100, 103, 116-126, 131-136, 141-144, 151-159, 170-179, 193, 215-220, 223
localstack/mock/proxy/sns_listener.py 35 28 20% 14-49
localstack/utils.py 0 0 100%
localstack/utils/aws.py 0 0 100%
localstack/utils/aws/aws_models.py 207 81 61% 8, 18, 21, 24, 49-52, 59-61, 64-76, 86-88, 90, 108-110, 113, 116, 119, 125-134, 138-144, 166, 187-189, 192-194, 199, 204-206, 209, 231-233, 238, 243, 245, 250, 252, 260-269, 274-278
localstack/utils/aws/aws_stack.py 284 49 83% 46, 57, 65, 71, 96, 98, 103, 105, 119, 121, 175-177, 199, 251-252, 256-257, 261-263, 302, 316, 318, 333, 396-403, 416, 419-420, 422-432, 462-463, 467
localstack/utils/cloudformation.py 0 0 100%
localstack/utils/cloudformation/template_deployer.py 43 3 93% 57, 65-66
localstack/utils/common.py 294 102 65% 57-58, 70-72, 76, 82, 85, 89-94, 96, 100, 102-103, 106, 110, 113-115, 126-127, 133, 135, 141, 154, 160, 162, 167, 171-185, 189, 193-198, 202, 207, 211, 215, 221-224, 227-235, 240, 245, 252-255, 257, 263-266, 269-272, 277, 284-285, 294-296, 300, 306, 310, 318, 324, 330, 334-338, 345-353, 357, 360, 365-367, 372, 381, 383, 386-390, 402
localstack/utils/compat.py 5 1 80% 11
localstack/utils/kinesis.py 0 0 100%
localstack/utils/kinesis/kclipy_helper.py 37 0 100%
localstack/utils/kinesis/kinesis_connector.py 263 65 75% 57-63, 66-68, 71-83, 86-90, 93-99, 102, 106-108, 112-113, 123-124, 150, 173, 180-181, 186, 205, 213, 239, 261-266, 279-284, 332, 413, 428-429, 433-435
localstack/utils/kinesis/kinesis_util.py 48 7 85% 31-33, 52-54, 59
localstack/utils/persistence.py 74 42 43% 27-31, 39-56, 60-67, 74-85, 101-106
localstack/utils/testutil.py 128 11 91% 86-87, 134, 137, 144-146, 151, 155-156, 159
------------------------------------------------------------------------------------
TOTAL 3300 896 73%
----------------------------------------------------------------------
Ran 14 tests in 135.309s
FAILED (errors=1, failures=1)
Makefile:81: recipe for target 'test' failed
make: *** [test] Error 1
Comments (3)

Account Deactivated (reporter):
Thank you for the response, @w_hummer. Here is the result of the docker commands:

$ docker --version
Docker version 17.05.0-ce, build 89658be

$ docker images | grep lambci
lambci/lambda    python2.7    5fe854406876    2 weeks ago    1.19GB

The tests completed successfully with local execution of Lambdas:
$ LAMBDA_EXECUTOR=local make test
make lint && \
. .venv/bin/activate; DEBUG= PYTHONPATH=`pwd` nosetests --with-coverage --logging-level=WARNING --nocapture --no-skip --exe --cover-erase --cover-tests --cover-inclusive --cover-package=localstack --with-xunit --exclude='.venv' . ##--exclude='.venv.*' .
make[1]: Entering directory '/home/msunardi/aws_projects/localstack'
(. .venv/bin/activate; pep8 --max-line-length=120 --ignore=E128 --exclude=node_modules,legacy,.venv,dist .)
make[1]: Leaving directory '/home/msunardi/aws_projects/localstack'
cmd: ES_JAVA_OPTS="$ES_JAVA_OPTS -Xms200m -Xmx500m" /home/msunardi/aws_projects/localstack/localstack/infra/elasticsearch/bin/elasticsearch -E http.port=4587 -E http.publish_port=4587 -E path.data=/home/msunardi/aws_projects/localstack/localstack/infra/elasticsearch/data
Starting local Elasticsearch (port 4571)...
Starting mock ES service (port 4578)...
Starting mock S3 (port 4572)...
Starting mock SNS (port 4575)...
Starting mock SQS (port 4576)...
Starting mock API Gateway (port 4567)...
Starting mock DynamoDB (port 4569)...
Starting mock DynamoDB Streams service (port 4570)...
Starting mock Firehose service (port 4573)...
Starting mock Lambda service (port 4574)...
Starting mock Kinesis (port 4568)...
Starting mock Redshift (port 4577)...
Starting mock Route53 (port 4580)...
Starting mock SES (port 4579)...
Starting mock CloudFormation (port 4581)...
[2017-05-24T11:48:18,970][INFO ][o.e.n.Node ] [] initializing ...
[2017-05-24T11:48:19,122][INFO ][o.e.e.NodeEnvironment ] [s22EhH5] using [1] data paths, mounts [[/ (/dev/mapper/ubuntu--vg-root)]], net usable_space [2.7gb], net total_space [70.9gb], spins? [possibly], types [ext4]
[2017-05-24T11:48:19,123][INFO ][o.e.e.NodeEnvironment ] [s22EhH5] heap size [483.3mb], compressed ordinary object pointers [true]
[2017-05-24T11:48:19,132][INFO ][o.e.n.Node ] node name [s22EhH5] derived from node ID [s22EhH53QEaSmn6_Tkoa1g]; set [node.name] to override
[2017-05-24T11:48:19,134][INFO ][o.e.n.Node ] version[5.3.0], pid[11647], build[3adb13b/2017-03-23T03:31:50.652Z], OS[Linux/4.4.0-78-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_131/25.131-b11]
[2017-05-24T11:48:20,183][INFO ][o.e.p.PluginsService ] [s22EhH5] loaded module [aggs-matrix-stats]
[2017-05-24T11:48:20,183][INFO ][o.e.p.PluginsService ] [s22EhH5] loaded module [ingest-common]
[2017-05-24T11:48:20,184][INFO ][o.e.p.PluginsService ] [s22EhH5] loaded module [lang-expression]
[2017-05-24T11:48:20,184][INFO ][o.e.p.PluginsService ] [s22EhH5] loaded module [lang-groovy]
[2017-05-24T11:48:20,184][INFO ][o.e.p.PluginsService ] [s22EhH5] loaded module [lang-mustache]
[2017-05-24T11:48:20,184][INFO ][o.e.p.PluginsService ] [s22EhH5] loaded module [lang-painless]
[2017-05-24T11:48:20,184][INFO ][o.e.p.PluginsService ] [s22EhH5] loaded module [percolator]
[2017-05-24T11:48:20,185][INFO ][o.e.p.PluginsService ] [s22EhH5] loaded module [reindex]
[2017-05-24T11:48:20,185][INFO ][o.e.p.PluginsService ] [s22EhH5] loaded module [transport-netty3]
[2017-05-24T11:48:20,185][INFO ][o.e.p.PluginsService ] [s22EhH5] loaded module [transport-netty4]
[2017-05-24T11:48:20,186][INFO ][o.e.p.PluginsService ] [s22EhH5] no plugins loaded
[2017-05-24T11:48:22,792][INFO ][o.e.n.Node ] initialized
[2017-05-24T11:48:22,796][INFO ][o.e.n.Node ] [s22EhH5] starting ...
[2017-05-24T11:48:23,043][INFO ][o.e.t.TransportService ] [s22EhH5] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2017-05-24T11:48:23,052][WARN ][o.e.b.BootstrapChecks ] [s22EhH5] initial heap size [209715200] not equal to maximum heap size [524288000]; this can cause resize pauses and prevents mlockall from locking the entire heap
[2017-05-24T11:48:23,053][WARN ][o.e.b.BootstrapChecks ] [s22EhH5] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2017-05-24T11:48:26,129][INFO ][o.e.c.s.ClusterService ] [s22EhH5] new_master {s22EhH5}{s22EhH53QEaSmn6_Tkoa1g}{8wGUllIgR0ejsnlUt2T_-A}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-05-24T11:48:26,171][INFO ][o.e.h.n.Netty4HttpServerTransport] [s22EhH5] publish_address {127.0.0.1:4587}, bound_addresses {[::1]:4587}, {127.0.0.1:4587}
[2017-05-24T11:48:26,175][INFO ][o.e.n.Node ] [s22EhH5] started
[2017-05-24T11:48:26,221][INFO ][o.e.g.GatewayService ] [s22EhH5] recovered [0] indices into cluster_state
Ready.
.Thread run method <bound method GenericProxy.run_cmd of <GenericProxy(Thread-69, started daemon 140576125073152)>>({}) failed: Traceback (most recent call last):
File "/home/msunardi/aws_projects/localstack/localstack/utils/common.py", line 49, in run
self.func(self.params)
File "/home/msunardi/aws_projects/localstack/localstack/mock/generic_proxy.py", line 137, in run_cmd
self.httpd = ThreadedHTTPServer(("", self.port), GenericProxyHandler)
File "/usr/lib/python2.7/SocketServer.py", line 417, in __init__
self.server_bind()
File "/usr/lib/python2.7/BaseHTTPServer.py", line 108, in server_bind
SocketServer.TCPServer.server_bind(self)
File "/usr/lib/python2.7/SocketServer.py", line 431, in server_bind
self.socket.bind(self.server_address)
File "/usr/lib/python2.7/socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
error: [Errno 98] Address already in use
...[2017-05-24T11:48:56,161][WARN ][o.e.c.r.a.DiskThresholdMonitor] [s22EhH5] high disk watermark [90%] exceeded on [s22EhH53QEaSmn6_Tkoa1g][s22EhH5][/home/msunardi/aws_projects/localstack/localstack/infra/elasticsearch/data/nodes/0] free: 2.7gb[3.9%], shards will be relocated away from this node
[2017-05-24T11:48:56,164][INFO ][o.e.c.r.a.DiskThresholdMonitor] [s22EhH5] rerouting shards: [high disk watermark exceeded on one or more nodes]
...Creating test streams...
[2017-05-24T11:49:26,176][WARN ][o.e.c.r.a.DiskThresholdMonitor] [s22EhH5] high disk watermark [90%] exceeded on [s22EhH53QEaSmn6_Tkoa1g][s22EhH5][/home/msunardi/aws_projects/localstack/localstack/infra/elasticsearch/data/nodes/0] free: 2.7gb[3.9%], shards will be relocated away from this node
Kinesis consumer initialized.
Putting 10 items to table...
Putting 10 items to stream...
Waiting some time before finishing test.
DynamoDB and Kinesis updates retrieved (actual/expected): 20/20
...[2017-05-24T11:49:56,179][WARN ][o.e.c.r.a.DiskThresholdMonitor] [s22EhH5] high disk watermark [90%] exceeded on [s22EhH53QEaSmn6_Tkoa1g][s22EhH5][/home/msunardi/aws_projects/localstack/localstack/infra/elasticsearch/data/nodes/0] free: 2.7gb[3.9%], shards will be relocated away from this node
[2017-05-24T11:49:56,179][INFO ][o.e.c.r.a.DiskThresholdMonitor] [s22EhH5] rerouting shards: [high disk watermark exceeded on one or more nodes]
.Shutdown
...
Name Stmts Miss Cover Missing
------------------------------------------------------------------------------------
localstack.py 0 0 100%
localstack/config.py 33 4 88% 30, 40, 42, 46
localstack/constants.py 47 1 98% 77
localstack/dashboard.py 0 0 100%
localstack/dashboard/api.py 44 25 43% 19-22, 34-37, 53-56, 70-73, 78, 83, 87-91, 95-96
localstack/dashboard/infra.py 376 94 75% 86, 93, 107-108, 114-115, 131, 140-141, 147-150, 161-165, 169-173, 178-183, 187-192, 198, 217-218, 224-225, 231, 244, 247, 261-267, 294-302, 304-305, 328-329, 347-355, 361-362, 383-384, 389-393, 397-405, 433-435, 473-474, 477-479
localstack/mock.py 0 0 100%
localstack/mock/apis.py 0 0 100%
localstack/mock/apis/dynamodbstreams_api.py 31 4 87% 39, 50-52
localstack/mock/apis/es_api.py 21 3 86% 32-34
localstack/mock/apis/firehose_api.py 123 25 80% 61-63, 70-71, 82, 109, 118, 138-142, 153-168, 180-182
localstack/mock/apis/lambda_api.py 377 110 71% 49, 125-133, 137-140, 148-153, 159-162, 181-182, 196-200, 211-220, 226-231, 234-235, 258-260, 290, 299, 309-310, 315, 340-341, 367-368, 371, 415-416, 440-445, 472-487, 499-501, 528-535, 550, 553-554, 560, 574, 608-615, 624-628, 639-641
localstack/mock/generic_proxy.py 115 13 89% 50-51, 58-60, 75, 94, 97-100, 103, 120
localstack/mock/infra.py 284 45 84% 47-49, 61-63, 81, 97, 172, 176-177, 191, 198, 212, 227, 233, 249-250, 266-268, 270, 280-282, 284, 294-296, 298, 311, 313, 334, 337-338, 412-413, 415-416, 419, 424-430
localstack/mock/install.py 51 19 63% 25-33, 39-40, 46-47, 64, 69-73
localstack/mock/proxy.py 0 0 100%
localstack/mock/proxy/apigateway_listener.py 50 15 70% 16-19, 32-35, 48-50, 60-65
localstack/mock/proxy/cloudformation_listener.py 80 46 43% 23-35, 39-48, 52-57, 61-67, 71-81, 85-107, 119, 121, 123
localstack/mock/proxy/dynamodb_listener.py 73 8 89% 26, 49-51, 61-62, 88-89
localstack/mock/proxy/kinesis_listener.py 38 5 87% 18-26
localstack/mock/proxy/s3_listener.py 139 57 59% 85-86, 89-93, 96-100, 103, 116-126, 131-136, 141-144, 151-159, 170-179, 193, 215-220, 223
localstack/mock/proxy/sns_listener.py 35 28 20% 14-49
localstack/utils.py 0 0 100%
localstack/utils/aws.py 0 0 100%
localstack/utils/aws/aws_models.py 207 81 61% 8, 18, 21, 24, 49-52, 59-61, 64-76, 86-88, 90, 108-110, 113, 116, 119, 125-134, 138-144, 166, 187-189, 192-194, 199, 204-206, 209, 231-233, 238, 243, 245, 250, 252, 260-269, 274-278
localstack/utils/aws/aws_stack.py 284 49 83% 46, 57, 65, 71, 96, 98, 103, 105, 119, 121, 175-177, 199, 251-252, 256-257, 261-263, 302, 316, 318, 333, 396-403, 416, 419-420, 422-432, 462-463, 467
localstack/utils/cloudformation.py 0 0 100%
localstack/utils/cloudformation/template_deployer.py 43 3 93% 57, 65-66
localstack/utils/common.py 293 55 81% 57, 85, 90-94, 97, 100, 102, 106, 113-114, 124-126, 133, 160, 167, 171-174, 178-180, 184, 196-197, 207, 215, 227, 232-234, 254-255, 269-271, 284, 295, 306, 320-323, 334-337, 349-352, 381, 389
localstack/utils/compat.py 5 1 80% 11
localstack/utils/kinesis.py 0 0 100%
localstack/utils/kinesis/kclipy_helper.py 37 0 100%
localstack/utils/kinesis/kinesis_connector.py 263 65 75% 57-63, 66-68, 71-83, 86-90, 93-99, 102, 106-108, 112-113, 123-124, 150, 173, 180-181, 186, 205, 213, 239, 261-266, 279-284, 332, 413, 428-429, 433-435
localstack/utils/kinesis/kinesis_util.py 48 7 85% 31-33, 52-54, 59
localstack/utils/persistence.py 74 42 43% 27-31, 39-56, 60-67, 74-85, 101-106
localstack/utils/testutil.py 128 11 91% 86-87, 134, 137, 144-146, 151, 155-156, 159
------------------------------------------------------------------------------------
TOTAL 3299 816 75%
----------------------------------------------------------------------
Ran 14 tests in 104.591s
OK
Although I'm not sure what the 'Address already in use' error is referring to.
Thanks!
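The 'Address already in use' (errno 98, EADDRINUSE) in the run above is raised when a listener tries to bind a TCP port that another process or thread is still holding, e.g. a mock service left over from a previous run. A minimal sketch of the condition (not localstack code):

```python
# Minimal sketch of EADDRINUSE: a second socket binding a port
# that an existing listener already holds, as in the traceback above.
import errno
import socket

first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))   # let the OS pick a free port
first.listen(1)
port = first.getsockname()[1]

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
caught = None
try:
    second.bind(("127.0.0.1", port))  # same port, still held -> EADDRINUSE
except OSError as e:
    caught = e.errno
finally:
    second.close()
    first.close()

print(caught == errno.EADDRINUSE)
```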
Account Deactivated - changed status to resolved
Thanks for testing/confirming.
Thanks for reporting. The following error line looks like there is something wrong with running the Lambda code in Docker containers:
Can you please post the output of the following commands:
There is a configuration parameter to use local execution of Lambdas instead of execution in Docker containers. Can you try running this:
Thanks
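The configuration parameter referred to here appears in the reporter's follow-up above as the one-shot prefix LAMBDA_EXECUTOR=local make test. A small Python sketch of how such a per-invocation environment variable reaches only the child process (the variable name is taken from the thread; the rest is illustration):

```python
# Illustrates the semantics of `LAMBDA_EXECUTOR=local make test`:
# the variable is injected into the child's environment only,
# leaving the calling shell/process untouched.
import os
import subprocess
import sys

env = dict(os.environ, LAMBDA_EXECUTOR="local")  # like the VAR=value prefix
out = subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.environ.get('LAMBDA_EXECUTOR'))"],
    env=env,
)
print(out.decode().strip())               # the child sees "local"
print(os.environ.get("LAMBDA_EXECUTOR"))  # parent unchanged (None unless already set)
```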