[enhancement] loadFlatMap
yaml object → flatMap
// yaml
a:
  b: 1
  c: 2
  d:
    e: 5

// map
"a.b" to 1
"a.c" to 2
"a.d.e" to 5
Comments (9)
-
reporter - changed title to [enhancement] loadFlatMap
-
reporter @Andrey Somov I created a PR! https://bitbucket.org/snakeyaml/snakeyaml/pull-requests/18/542-feat-loadflatmap
-
@김승갑 I can imagine why you want this kind of map, but I do not think it belongs to this library as a core function.
You can do exactly the same outside of snakeyaml: apply your flatMap to the Map<String, Object> loaded by snakeyaml, just like you do in pull request #18.
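The suggestion above can be sketched as a small post-processing step over the `Map<String, Object>` that snakeyaml's `load` already returns. The class and method names below are hypothetical (they are not part of snakeyaml or of PR #18); the nested map is built by hand to stand in for a loaded YAML document:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FlatMapExample {

    // Flattens nested maps into a single map with dot-separated keys,
    // e.g. {a={b=1}} becomes {a.b=1}.
    static Map<String, Object> flatten(Map<String, Object> source) {
        Map<String, Object> result = new LinkedHashMap<>();
        flattenInto("", source, result);
        return result;
    }

    private static void flattenInto(String prefix, Map<String, Object> source,
                                    Map<String, Object> result) {
        for (Map.Entry<String, Object> entry : source.entrySet()) {
            String key = prefix.isEmpty() ? entry.getKey()
                                          : prefix + "." + entry.getKey();
            Object value = entry.getValue();
            if (value instanceof Map) {
                @SuppressWarnings("unchecked")
                Map<String, Object> nested = (Map<String, Object>) value;
                flattenInto(key, nested, result);
            } else {
                result.put(key, value);
            }
        }
    }

    public static void main(String[] args) {
        // Stand-in for: Map<String, Object> nested = new Yaml().load(input);
        Map<String, Object> d = new LinkedHashMap<>();
        d.put("e", 5);
        Map<String, Object> a = new LinkedHashMap<>();
        a.put("b", 1);
        a.put("c", 2);
        a.put("d", d);
        Map<String, Object> nested = new LinkedHashMap<>();
        nested.put("a", a);

        System.out.println(flatten(nested));
        // {a.b=1, a.c=2, a.d.e=5}
    }
}
```

This keeps the flattening policy (key separator, list handling, etc.) in application code rather than in the parser.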
-
reporter @Alexander Maslov
Right, but yaml → flatMap is useful for configuration (Spring, Apache, ..).
If you don't need this PR, I'll close it~
-
reporter
spark:
  sql:
    crossJoin.enabled: true
    hive.convertMetastoreOrc: false
    sources.partitionOverwriteMode: dynamic
    broadcastTimeout: 36000
    autoBroadcastJoinThreshold: -1
    legacy:
      parquet:
        int96RebaseModeInRead: CORRECTED
        int96RebaseModeInWrite: CORRECTED
        datetimeRebaseModeInRead: CORRECTED
        datetimeRebaseModeInWrite: CORRECTED
      timeParserPolicy: LEGACY
    storeAssignmentPolicy: LEGACY
  hive:
    exec:
      dynamic.partition: true
      dynamic.partition.mode: nonstrict
    metastore:
      client:
        factory:
          class: com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory
-
reporter https://spark.apache.org/docs/latest/configuration.html#application-properties

Property | Default | Meaning | Since
spark.app.name | (none) | The name of your application. This will appear in the UI and in log data. | 0.9.0
spark.driver.cores | 1 | Number of cores to use for the driver process, only in cluster mode. | 1.3.0
spark.driver.maxResultSize | | |
-
- changed status to wontfix
This seems to be an interesting use case, but not for a general yaml parsing library.
You can pre/post-process the maps you are dumping and loading to achieve what you need.
-
reporter I agree, thanks for the comment.