mirror of https://gitlab.com/curben/blog
fix(highlight.js): conf lang/alias does not exist
parent bdc4a74c79
commit b79f818ac5
@@ -9,7 +9,7 @@ tags:
 
 If you have configured dnf-automatic to only apply security updates on CentOS Stream, it will not install any updates.
 
-```conf /etc/dnf/automatic.conf
+```plain /etc/dnf/automatic.conf
 [commands]
 upgrade_type = security
 ```
@@ -53,7 +53,7 @@ CentOS used to have updateinfo prior to CentOS 7; after it was removed in CentOS
 
 Automatic updates only works in CentOS Stream with this config:
 
-```conf /etc/dnf/automatic.conf
+```plain /etc/dnf/automatic.conf
 [commands]
 upgrade_type = default
 apply_updates = yes
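A quick way to confirm the behaviour the post describes is to ask dnf for the security advisories the enabled repos advertise. On CentOS Stream the repos ship no updateinfo metadata, so the list usually comes back empty and `upgrade_type = security` has nothing to apply (an illustrative check, not part of the diff):

```plain
# List security advisories known to the enabled repos;
# an empty result means dnf-automatic's security filter matches nothing.
dnf updateinfo list --security
```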
@@ -22,14 +22,36 @@ The recommended logging format according to [Splunk best practice](https://dev.s
 The format can be achieved by exporting live event in JSON and append to a log file. However, I encountered a situation where the log file can only be generated by batch. Exporting the equivalent of the previous "example.log" in JSON without string manipulation looks like this:
 
 ```json example.json
-[{"datetime": 1672531212123456, "event_id": 1, "key1": "value1", "key2": "value2", "key3": "value3"}, {"datetime": 1672531213789012, "event_id": 2, "key1": "value1", "key2": "value2", "key3": "value3"}, {"datetime": 1672531214345678, "event_id": 3, "key1": "value1", "key2": "value2", "key3": "value3"}]
+[
+  {
+    "datetime": 1672531212123456,
+    "event_id": 1,
+    "key1": "value1",
+    "key2": "value2",
+    "key3": "value3"
+  },
+  {
+    "datetime": 1672531213789012,
+    "event_id": 2,
+    "key1": "value1",
+    "key2": "value2",
+    "key3": "value3"
+  },
+  {
+    "datetime": 1672531214345678,
+    "event_id": 3,
+    "key1": "value1",
+    "key2": "value2",
+    "key3": "value3"
+  }
+]
 ```
 
 I will detail the required configurations in this post, so that Splunk is able to parse it correctly even though "example.json" is not a valid JSON file.
 
 ## UF inputs.conf
 
-```conf $SPLUNK_HOME/etc/deployment-apps/foo/local/inputs.conf
+```plain $SPLUNK_HOME/etc/deployment-apps/foo/local/inputs.conf
 [monitor:///var/log/app_a]
 disabled = 0
 index = index_name
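This hunk cuts off just before the **sourcetype** setting that the next hunk's context sentence refers to. A plausible reading of the full monitor stanza, with the sourcetype name inferred from the `[app_a_event]` props.conf stanza below (a reconstruction, not a quoted line from the post):

```plain
[monitor:///var/log/app_a]
disabled = 0
index = index_name
# inferred: the value has to match the props.conf stanza name below
sourcetype = app_a_event
```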
@@ -60,7 +82,7 @@ Specify an appropriate value in **sourcetype** config, the value will be the val
 
 ## Forwarder props.conf
 
-```conf props.conf
+```plain props.conf
 [app_a_event]
 description = App A logs
 INDEXED_EXTRACTIONS = JSON
@@ -92,7 +114,7 @@ If there is a deployment server, then the config file should be in path A, in wh
 
 ## Search head props.conf
 
-```conf props.conf
+```plain props.conf
 [app_a_event]
 description = App A logs
 KV_MODE = none
@@ -111,14 +133,22 @@ For Splunk Cloud deployment, the above configuration can be added through a cust
 It is important to note `SEDCMD` [runs](https://www.aplura.com/assets/pdf/props_conf_order.pdf) [after](https://wiki.splunk.com/Community:HowIndexingWorks) `INDEXED_EXTRACTIONS`. I noticed [this behaviour](https://community.splunk.com/t5/Getting-Data-In/SEDCMD-not-actually-replacing-data-during-indexing/m-p/387812/highlight/true#M69511) when I tried to ingest API response of [LibreNMS](https://gitlab.com/curben/splunk-scripts/-/tree/main/TA-librenms-data-poller?ref_type=heads).
 
 ```json
-{"status": "ok", "devices": [{"device_id": 1, "key1": "value1", "key2": "value2"}, {"device_id": 2, "key1": "value1", "key2": "value2"}, {"device_id": 3, "key1": "value1", "key2": "value2"}], "count": 3}
+{
+  "status": "ok",
+  "devices": [
+    { "device_id": 1, "key1": "value1", "key2": "value2" },
+    { "device_id": 2, "key1": "value1", "key2": "value2" },
+    { "device_id": 3, "key1": "value1", "key2": "value2" }
+  ],
+  "count": 3
+}
 ```
 
 In this scenario, I only wanted to ingest "devices" array where each item is an event. The previous approach not only did not split the array, but "status" and "count" fields still existed in each event despite the use of SEDCMD to remove them.
 
 The solution is not to use INDEXED_EXTRACTIONS (index-time field extraction), but use KV_MODE (search-time field extraction) instead. INDEXED_EXTRACTIONS is not enabled so that SEDCMD works more reliably. If it's enabled, the JSON parser can unpredictably split part of the prefix (in this case `{"status": "ok", "devices": [`) or suffix into separate events and SEDCMD does not work across events. SEDCMD does work with INDEXED_EXTRACTIONS, but you have to make sure the replacement is within an event
 
-```conf props.conf
+```plain props.conf
 # heavy forwarder or indexer
 [api_a_response]
 description = API A response
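The SEDCMD and line-breaking settings referred to above sit below this hunk's cut-off. As a sketch of how the prefix/suffix stripping could be wired up for the LibreNMS-style response, with regexes and class names that are mine rather than the post's:

```plain
[api_a_response]
# break the devices array so each {...} item becomes its own event
LINE_BREAKER = \}(,\s*)\{
SHOULD_LINEMERGE = 0
# strip the leading {"status": "ok", "devices": [ from the first event
SEDCMD-strip_prefix = s/^{"status": "ok", "devices": \[//
# strip the trailing ], "count": N} from the last event
SEDCMD-strip_suffix = s/\], "count": [0-9]+}$//
```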
@@ -134,7 +164,7 @@ SHOULD_LINEMERGE = 0
 
 ```
 
-```conf props.conf
+```plain props.conf
 # search head
 [api_a_response]
 description = API A response
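The search-head stanza for this sourcetype is cut off above; since the post's fix is search-time extraction, it presumably ends up enabling JSON KV extraction, roughly like this (my assumption, not a quoted line):

```plain
# search head
[api_a_response]
description = API A response
# assumed: search-time JSON field extraction in place of INDEXED_EXTRACTIONS
KV_MODE = json
```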
@@ -8,7 +8,7 @@ tags:
 
 When I first started creating custom Splunk app, I had an incorrect understanding of access control list (ACLs) configured using [default.meta.conf](https://docs.splunk.com/Documentation/Splunk/latest/Admin/Defaultmetaconf) (located at app_folder/metadata/default.meta) whereby I could grant read access to a role like this:
 
-```conf
+```plain
 []
 access = read : [ roleA ], write : [ ]
 
@@ -18,7 +18,7 @@ access = read : [ roleA, roleB ], write : [ ]
 
 Or like this:
 
-```conf
+```plain
 []
 access = read : [ roleA ], write : [ ]
 
@@ -42,7 +42,7 @@ None of the above configs will grant roleB read access to lookupB.csv. For the r
 
 Notice a role must at least have read access to the app. The simplest way to grant roleB read access is,
 
-```conf
+```plain
 []
 access = read : [ roleA, roleB ], write : [ ]
 ```
@@ -51,7 +51,7 @@ While the above config is effective, but it does not meet the access requirement
 
 roleB can be restricted as such:
 
-```conf
+```plain
 []
 access = read : [ roleA, roleB ], write : [ ]
 
@@ -71,12 +71,12 @@ It is effective and meets the requirement, but there is an issue. Every new look
 
 How to implement default-deny ACL? We can achieve it by separating into two apps: appA is accessible to roleA only, appB is accessible to roleA and roleB. Any object we want to share with roleA and roleB, we put it in appB instead.
 
-```conf appA
+```plain appA
 []
 access = read : [ roleA ], write : [ ]
 ```
 
-```conf appB
+```plain appB
 []
 access = read : [ roleA, roleB ], write : [ ]
 ```
@@ -87,7 +87,7 @@ In this approach, every new objects created in appA will not be accessible to ro
 
 I noticed lookup files that have object-level ACL, e.g.
 
-```conf
+```plain
 [lookups/lookupC.csv]
 access = read : [ roleA ], write : [ ]
 ```
@@ -350,7 +350,7 @@ Lookup definition provides matching rules for a lookup file. It can be configure
 
 A bare minimum lookup definition is as such:
 
-```conf transforms.conf
+```plain transforms.conf
 [lookup-definition-name]
 filename = lookup-filename.csv
 ```
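For context on how such a definition is consumed: the `lookup` search command (mentioned in the next hunk's header) references it by stanza name. An illustrative query with made-up lookup field names:

```plain
index=proxy
| lookup lookup-definition-name host OUTPUT category
```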
@@ -367,7 +367,7 @@ It is imperative to note that lookup definition only applies to `lookup` search
 
 ### Case-sensitive
 
-```conf transforms.conf
+```plain transforms.conf
 [urlhaus-filter-splunk-online]
 filename = urlhaus-filter-splunk-online.csv
 # applies to all fields
@@ -403,7 +403,7 @@ index=proxy
 
 ### Wildcard (lookup)
 
-```conf transforms.conf
+```plain transforms.conf
 [urlhaus-filter-splunk-online]
 filename = urlhaus-filter-splunk-online.csv
 match_type = WILDCARD(host_wildcard_suffix)
@@ -441,7 +441,7 @@ index=proxy
 
 ### CIDR-matching (lookup)
 
-```conf transforms.conf
+```plain transforms.conf
 [opendbl_ip]
 filename = opendbl_ip.csv
 match_type = CIDR(cidr_range)