Wednesday, November 15, 2017

I got your RBAC in AZURE... or at least one small part.

We went through some "minimal" AZURE training last week.  The trainer guy said we could organize and "bill" other departments using "TAGS".  BUT, some Googling/Binging didn't turn anything up... so here is what I did to create a custom AZURE RBAC role:


Save this file (after you edit the last part, of course) as TAG_reader.json:

{
  "Name": "Tag Reader",
  "IsCustom": true,
  "Description": "Can read tags.",
  "Actions": [
    "Microsoft.Resources/subscriptions/tagNames/read",
    "Microsoft.Resources/subscriptions/tagNames/tagValues/read"
  ],
  "NotActions": [

  ],
  "AssignableScopes": [
    "/subscriptions/PUTYOURSUBSCRIPTIONIDHERE"
  ]
}


then log into AZURE POSH (Login-AzureRmAccount), and run:

New-AzureRmRoleDefinition -InputFile TAG_reader.json


Then assign the person/people to that role.
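
If you also want to do the assignment from POSH, something like this should work (a quick sketch; the sign-in name is a placeholder, and the scope matches the AssignableScopes above):

New-AzureRmRoleAssignment -SignInName "someuser@yourdomain.com" -RoleDefinitionName "Tag Reader" -Scope "/subscriptions/PUTYOURSUBSCRIPTIONIDHERE"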


Friday, July 7, 2017

Advanced Group Policy Management is such a control freak...


One of the things I like about my job is that I do lots of different "enterprisy" things with Microsoft Windows.
This week I had to figure out how to avoid creating a change request every time someone edits a group policy.  Thankfully, Microsoft has a solution that, while it isn't perfect, is "good enough": Advanced Group Policy Management (AGPM).

The problem in our environment is that we have over 600 GPOs that I needed to "import"/"control" into AGPM... and unless the owner and permissions are just right, they cannot be "controlled."  I found a few people through some searches who were tackling either ownership or permissions... but not both.  So I present to you:  Set-AGPMRights.ps1

Again, I am not a "full time programmer", so your mileage will vary, and I expect you to review my code before you use it in any production environment.


# Set-AGPMRights.ps1
# Created by Bryan Loveless, June 2017
#
# This script sets the ownership and permissions that AGPM needs on GPOs,
# then "takes control" of them in the AGPM archive.
# Just change the necessary variables, and away you go.
# References for borrowed code are in the script blocks where used, if they were.

# Needs the GroupPolicy and ActiveDirectory modules (the latter for the AD: drive),
# plus the AGPM PowerShell module for Add-ControlledGPO.
Import-Module GroupPolicy
Import-Module ActiveDirectory

# ONLY CHANGE THE ONE LINE BELOW!!!  (After changing the users during initial config)
# It will support wildcards (*).
$GPOTARGET = "*"

######################## now the script parts not to configure ########################

# Get the list of all GPOs with that name.
$allGPOnames = (Get-GPO -All | ? { $_.DisplayName -like $GPOTARGET }).DisplayName

# Cycle through each one, granting the AGPM archive account and Domain Admins full rights.
foreach ($gpo in $allGPOnames) {
    # If you found this script online, below is where you would change the "AGPM archive account" info.
    Set-GPPermissions -Name $gpo -TargetName "YOURDOMAIN\YOURSERVICEACCOUNT" -TargetType User -PermissionLevel GpoEditDeleteModifySecurity
    Set-GPPermissions -Name $gpo -TargetName "YOURDOMAIN\Domain Admins" -TargetType Group -PermissionLevel GpoEditDeleteModifySecurity
}

# Now set the owner.
# Script to change a stale or existing owner of a GPO using the AD DACL modules.
# ref: https://gallery.technet.microsoft.com/scriptcenter/Script-to-Edit-Owner-on-bbba3562

$OwnerNew = "YOURSERVICEACCOUNT"   # name of the user or group that will become the owner
$GPOName  = $GPOTARGET             # GPO(s) to be updated; accepts wildcards, "*" updates all GPOs

# Get all GPOs, filtered if required.
$AllGPO = Get-GPO -All | ? { $_.DisplayName -like $GPOName }

""
"GPO Name" + "           " + "OwnerBefore" + "                      " + "OwnerAfter"
"--------" + "           " + "-----------" + "                      " + "----------"

foreach ($gp in $AllGPO) {

    # Get the GUID and add wildcards so we can find the matching AD object.
    $gpId = "*" + ($gp).Id + "*"

    # Store the GPO AD object in a variable.
    $Gpo1 = Get-ADObject -Filter { Name -like $gpId }

    # Store the new owner's SID in a variable.  This generic form works for
    # both user and group accounts (Get-ADUser/Get-ADGroup would also work).
    $Ownr = New-Object System.Security.Principal.SecurityIdentifier (Get-ADObject -Filter { Name -like $OwnerNew } -Properties objectSid).objectSid

    # Copy the DACL of the GPO object into a variable.
    $Acl = Get-ACL -Path "AD:$($Gpo1.DistinguishedName)"

    # Record the current owner so we can report the change.
    $aclBefore = $Acl.GetOwner([System.Security.Principal.NTAccount]).Value

    # Set the new owner.  Note: nothing is committed yet; only the variable has changed.
    $Acl.SetOwner($Ownr)

    # Commit the change from the variable back to the actual AD object.
    Set-ACL -Path "AD:$($Gpo1.DistinguishedName)" -ACLObject $Acl

    # Re-read the owner from AD (not from the old variable) to confirm the change was made.
    $aclAfter = (Get-ACL -Path "AD:$($Gpo1.DistinguishedName)").Owner

    $gp.DisplayName + "           " + $aclBefore + "           " + $aclAfter
}

# Now add the GPOs to the archive ("control" them).
# More on this command at https://technet.microsoft.com/itpro/powershell/mdop/agpm/add-controlledgpo
foreach ($gp in $AllGPO) {
    Add-ControlledGPO $gp
}

# Let the person running this know when the script finished, and that they
# need to wait for the changes to propagate.
Write-Host ""
Write-Host "This script finished:"
Get-Date
Write-Host ""
Write-Host "THIS MAY TAKE UP TO 15 MINUTES TO FINISH" -ForegroundColor Red
Write-Host ""
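
If you want a quick sanity check when it is done (assuming the AGPM PowerShell module is loaded and your AGPM server connection is set up), listing what the archive now controls should show your GPOs:

# Hedged sketch: Get-ControlledGPO ships with the AGPM PowerShell module.
Get-ControlledGPO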



Tuesday, July 12, 2016

Splunk and Tor exit nodes

We wanted a way to figure out if a "bad actor" was using TOR to connect to our servers/resources. So I thought: "SPLUNK TO THE RESCUE!"

I am a Windows guy, so I wrote a Powershell script to retrieve a list of TOR exit nodes and write them to a file.  Then I use SPLUNK to pick up that file, index it, and extract the interesting fields for use in other SPLUNK dashboards/reports/whatever.

--------------------------------------------

Scheduled task:

program/script: powershell.exe
Add arguments: -file "C:\SplunkInput\Scripts\Get-TorExitNodeList.ps1" -NoProfile
Start in: C:\SplunkInput\Scripts\
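
(If you would rather create the task from a command prompt than click through the GUI, this one-liner should be equivalent; the task name is my own invention:)

schtasks /create /tn "Get-TorExitNodeList" /sc minute /mo 5 /tr "powershell.exe -NoProfile -file C:\SplunkInput\Scripts\Get-TorExitNodeList.ps1"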

---------------------------------

POSH Script:

<#
.SYNOPSIS
    Gets the list of Tor exit nodes from the TOR project and puts them in a file.

.DESCRIPTION
    Gets the list of Tor exit nodes from the TOR project and puts them in a
    timestamped file for other use (like SPLUNK).
    By default it writes to "C:\SplunkInput\TorExitNode\TorExitNode<timestamp>.txt".

.EXAMPLE
    ./Get-TorExitNodeList.ps1

.NOTES
    I recommend you run this as a scheduled task, every 5 min or so, as the list changes often.
    I recommend you use a SPLUNK forwarder to take this file and index it.
    Created by Bryan Loveless
    Bryan.Loveless@gmail.com
    July 2016
#>

# Clean up the old log files.
Remove-Item C:\SplunkInput\TorExitNode\TorExitNode*.txt

# Get the current date/time to build a new file name.
$now = (Get-Date).ToString("s").Replace(":","-")
$file = "C:\SplunkInput\TorExitNode\TorExitNode" + $now + ".txt"

# Request the list of exit nodes, writing it to the file created above.
((Invoke-WebRequest -Uri https://check.torproject.org/exit-addresses -UseBasicParsing).RawContent) | Out-File $file -Append




------------------------------------------------------------------

For the SPLUNK forwarder, inputs.conf:
[monitor://C:\SplunkInput\TorExitNode]
crcSalt = <SOURCE>
#initCrcLength = 4096
disabled = 0
sourcetype = TorExitNodeList
index = tor

------------------------------------------------------------

For the SPLUNK field extraction, props.conf:

[TorExitNodeList]
DATETIME_CONFIG =
NO_BINARY_CHECK = true
category = Custom
description = Tor Exit Node List
disabled = false
pulldown_type = true
HEADER_FIELD_LINE_NUMBER=14
#LINE_BREAKER = \bLastStatus\b
SHOULD_LINEMERGE = True
BREAK_ONLY_BEFORE = ExitNode
EXTRACT-ip-torexitnode = ^\w+\s+(?P<ip>[^ ]+)
EXTRACT-Last_Checkin_Date,Last_Checkin_Time = ^(?:[^ \n]* ){2}(?P<Last_Checkin_Date>[^ ]+)\s+(?P<Last_Checkin_Time>\d+:\d+:\d+)
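
For reference, the feed is a series of small ExitNode/Published/LastStatus/ExitAddress records, which is what BREAK_ONLY_BEFORE and the extractions above key on.  If you want to sanity-check the file outside of SPLUNK, a quick POSH sketch like this pulls out just the exit IPs (the path is the one the script above writes to):

Get-Content C:\SplunkInput\TorExitNode\TorExitNode*.txt |
    ForEach-Object { if ($_ -match '^ExitAddress\s+(\S+)') { $Matches[1] } }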

-------------------------------------------------------------------

References:
http://wiki.splunk.com/Community:Troubleshooting_Monitor_Inputs
http://docs.splunk.com/Documentation/Splunk/6.0.1/Data/Howlogfilerotationishandled



Wednesday, May 18, 2016

Visual Studio POSH snippet, DocuComment

I am trying to get into more Visual Studio (as that is what "real" programmers use).... and to my delight, I found out that PowerShell is more supported than ever before in VS 2015.  Perhaps I can finally put away my childish things (ISE) and move on to something much more unnecessarily complicated.   So when I read about "code snippets" I was excited... but....

 No matter how much I searched, I could not find one for a simple "documentation block."  You know, the one with the ".Synopsis" and ".Example" that we are supposed to use in our scripts, but never do?  Perhaps I can make the world a better place by figuring it out and allowing others to use it.  Now you have no excuse not to document your code better; you are just three clicks away from having the "DocuComment" block created for you!

You will need to put the code below in to a file with an extension of ".snippet" and import it into VS.
(Details of this can be found at: https://msdn.microsoft.com/en-us/library/9ybhaktf(v=vs.100).aspx )

DocuComment.snippet :

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets  xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>DocuComment Block</Title>
      <Shortcut>docucomment</Shortcut>
      <Description>Code snippet for a comment block.</Description>
      <Author>Bryan Loveless bryan.loveless@gmail.com</Author>
      <SnippetTypes>
        <SnippetType>Expansion</SnippetType>
        <SnippetType>SurroundsWithStatement</SnippetType>
      </SnippetTypes>
    </Header>
    <Snippet>
      <Declarations>
        <Literal>
          <ID>param1</ID>
          <ToolTip>DocuComment</ToolTip>
          <Default>
            .SYNOPSIS

            synopsis of script, overall idea



            .DESCRIPTION

            description of overall script, more detail than synopsis



            .PARAMETER parameternamehere

            parameter description, if required, possible values



            .PARAMETER path

            parameter path



            .EXAMPLE

            example of script use, return behavior.



            .EXAMPLE

            another example if there are more.  This can be repeated for as many examples as you want



            .NOTES

            other misc notes, perhaps permissions needed, dates of script.
          </Default>
        </Literal>
      </Declarations>
      <References />
      <Code Language="PowerShell">
        <![CDATA[<#
$param1$
$selected$ $end$
#>]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>

Wednesday, December 16, 2015

The Netscaler is hiding stuff from you...

I have been thinking recently about how to hide my infrastructure info from the public, and one easy way is to stop telling the world what type of webserver you are running.  Now I am not going to get into the discussion of whether or not "security through obscurity" works... but this is so easy that even if it only hinders some script kiddies, I will be happy.

There are lots of ways to see the response headers from your webserver, and I found a scanner that will tell you that and a bit more:  https://securityheaders.io  I ran some of my URLs through the scanner, and sure enough, they are blabbing to the world what versions of whatever they have....

So instead of trying to figure out how to get all of my webservers to shut up, I decided to use the Netscaler to just remove the headers before they are presented to the client.  I must admit, it was pretty easy... from the CLI.  When I tried it from the GUI, I got a strange message and didn't want to fuss around with it much more.

 -----------------------------

Remove "Server" header:

add rewrite action Delete_server_header_action delete_http_header Server -bypassSafetyCheck YES -comment "This will delete the Server Header field from Server's response before sending to client"

add rewrite policy Delete_server_header_policy "HTTP.RES.HEADER(\"Server\").EXISTS" Delete_server_header_action -comment "This will delete the Server header field from server\'s response before sending to client"

Now to remove "x-powered-by" header:

add rewrite action Delete_x-powered-by_header_action delete_http_header X-Powered-By -comment "This will delete the X-Powered-By Header field from Server's response before sending to client"

add rewrite policy Delete_x-powered-by_header_policy "HTTP.RES.HEADER(\"X-Powered-By\").EXISTS" Delete_x-powered-by_header_action -comment "This will delete the X-Powered-By header field from server\'s response before sending to client"

-------------------------

then bind them both to your Content Switching virtual server, give them priorities starting at 85 (in my case I had a few others I wanted to run afterwards), and change the "goto expression" to "NEXT"
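
For the CLI-only crowd, the binds look something like this (the vserver name is a placeholder, rewrite policies bound to a vserver need the RESPONSE bind point, and priorities must be unique, hence 85 and 90):

bind cs vserver YOUR-CS-VSERVER -policyName Delete_server_header_policy -priority 85 -gotoPriorityExpression NEXT -type RESPONSE

bind cs vserver YOUR-CS-VSERVER -policyName Delete_x-powered-by_header_policy -priority 90 -gotoPriorityExpression NEXT -type RESPONSE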


Easy, right?  Now run your test again, and those headers are now missing... of course, you could replace the headers with something fun, like "X-Powered-By: The Dark Side" or whatever.... but I am not sure my employer would appreciate the humor as much as I would.
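
(If you want to check from POSH instead of re-running the scanner, a quick request shows exactly what is coming back; the URL is a placeholder:)

$resp = Invoke-WebRequest -Uri "https://your.site.blah" -UseBasicParsing
$resp.Headers   # the Server and X-Powered-By keys should now be gone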

Friday, October 16, 2015

OnBase isn't.



Executive summary:  OnBase has terrible technical support, and the simplest things are either not supported or are not possible. (oh, but I found out they have a slide)


I had a ticket to use Group Policy to push out a change to the ODBC settings for OnBase users, as the back-end database server was being moved and upgraded.  Simple, right?

This:

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ODBC\ODBC.INI\OnBaseProd]
"Driver"="C:\\Windows\\SysWOW64\\sqlncli10.dll"
"Description"="OnBaseProd"
"Server"="serverA.company.blah "
"Database"="OnBase"


Is replaced through group policy to:

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ODBC\ODBC.INI\OnBaseProd]
"Driver"="C:\\Windows\\SysWOW64\\sqlncli10.dll"
"Description"="OnBaseProd"
"Server"="serverB.company.blah "
"Database"="OnBase"


Seems pretty straightforward.  Notice that only one line was changed, and it was the server name.  (This was also done for the 32bit side, after discovering that OnBase can ONLY use the 32bit drivers.)
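
(For what it's worth, if you wanted to make the same change by hand with POSH instead of a GPP registry item, it is just one property; a sketch using the values from above:)

Set-ItemProperty -Path "HKLM:\SOFTWARE\Wow6432Node\ODBC\ODBC.INI\OnBaseProd" -Name "Server" -Value "serverB.company.blah"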

(Disclaimer:  I was not around for the initial installation of the OnBase, I am not the OnBase admin, I am not the OnBase DBA.  I am a Server Admin who takes care of Active Directory, GPOs, and lots of other non-OnBase items.  My views do not necessarily reflect those of my employer...)

A few hours later, users get the GPO, and OnBase doesn't work.  They keep getting prompted for a username.  We had tested this numerous times over several months, and this never happened in our test environment.

We look around the new DB server; all security settings, users, and roles are the same.  It appears that OnBase security is based upon (at least at the DB connection level) the hostname of the connecting client.  No prob, we thought: the hostname, as you can read above, has not changed.  We added the AD user group to the same role as the machines... nothing.

After fussing around a bit, we decide to call OnBase Tech support.  I was surprised by the lack of support... and I have worked with Adobe tech support.

Me:  We changed ODBC server name, clients are prompted to log in... but it doesn't work, did we miss something?
OnBase:  You have to log in with the user HSI.
Me:  But the DB user HSI is a database owner.
OnBase:  Yeah.

<a few min later as I clarify with him>
Me:  So you want me to figure out how to push a database owner username and clear text password to 5,000 machines?
OnBase:  Yes, that is the only way to make the initial connection to the DB for the Thick Clients.
<I put the phone on mute while my partner talks to the Tech support guy and I bang my head against the wall>
<Also, to get an idea of the type of security OnBase uses, their "network security" password that is hard coded into the server app is "ROMANZO" as documented.>

Me:  So what about if we create a new user in the DB, one that is not the DB owner (and therefore cannot delete all the tables in the database), and give that user permissions to edit only the tables that are needed for the initial setup?
OnBase:  That won't work, as the OnBase Thick client is hard coded to only accept the username HSI.  You have to use that username or it won't work.
<I think to myself, are you effin' kidding me?>

Me:  Ok, what about if I remove most of the permissions for HSI, so it is not an owner of the DB (and therefore not capable of destroying years of work), so that HSI is only used for the initial install for clients?
OnBase:  Changing any of the Database backend will violate our contract with you and we will not support you anymore.

<I am not sure how we can get any less support at this point in time... time to frame it a different way, perhaps it is a communication problem... as sprinkled throughout this conversation, he kept saying it is a Microsoft ODBC problem>
Me:  Ok, let's say I am a multinational corporation, and I want to install your product remotely to thousands of clients across the world.... how would I do that?
OnBase:  We don't support installation of our clients.
<this was then confirmed by our OnBase sysadmin who said they had to write a custom installer to copy files, write registry keys, create shortcuts, etc..... I couldn't believe it.>

After trying a few more perspectives, I realized that every machine was going to have to be touched by a person who is trustworthy enough to know the password to the database owner user.

So like everyone else who is out of ideas, I decided to take to Twitter:



Hey, I received a message from "+OnBase by Hyland " asking me to direct message them, so I do.


I wrote them back:  (Times are approx and local, timespans are not)


Bryan
Hello,  Could we talk via phone, my number is xxx-xxx-xxxx.... have a meeting in about 30 min or so, then back after 1:30ish
(10:20AM Thurs)

 OnBase by Hyland 
 Bryan, (Name of person who, I find out, is our sales rep) will be reaching out to you shortly. Thank you.
(11:20AM Thurs)

 Bryan
 great, it is 2:00 XX time as I write this
(2:00PM Thurs)

 Bryan
 or, you can e-mail me at mywork.e-mailaddresshere to schedule some time or start an e-mail thread
(2:00PM Thurs)

 OnBase by Hyland
 Sounds good! We will be in touch soon.
(2:00PM Thurs)

 Bryan
 I am leaving, as I have been here since 6am or so.... I am here normally 8-4:30ish AZ time
(4:40PM Thurs)

 OnBase by Hyland 
 (Name of person who, I find out, is our sales rep) has been in touch with your Sys Admin to resolve any issues. Thanks for reaching out!
(5:30AM Fri)

 Bryan
 I am the server admin.  Was there a solution other than handing out the HSI username and password to every user?
(6:30AM Fri)

 OnBase by Hyland 
 Let us look into this and get back to you. Thanks!
(6:30AM Fri)



So here is a multi-million dollar software company whose technical support reps told us "we were not trained on that" or "I don't see anything in the manual about that," that requires normal users to log in using the database owner account, and that doesn't have a way to deploy its "thick clients" using the number one business software in the world (Microsoft Active Directory)?

Hey, but at least they have a corporate slide I guess:
https://en.wikipedia.org/wiki/Hyland_Software#/media/File:Hyland_redslide.jpg


--Bryan


Tuesday, August 4, 2015

Netscaler cert... damn thing hung up on me again

Netscaler has a strange GUI that I think was designed as an "afterthought" by the developers.  The more you use it, the more you wonder why stuff is in the order it is, or in the grouping it is in.  Sometimes the Netscaler will perform an operation and drop your connection without warning.  So here is how to install certificates, which might end in a dropped connection when associating a cert with the device management interface.

Request a certificate as you would normally do, using IIS.  This has been documented in plenty of other places, so it is skipped here.

Because of the HA pair, you will need one cert, but make it good for 2 DNS names, including NS.blah.com and NS-Otherlocation.blah.com

This is also good for moving a site's SSL certificate to the Netscaler for load balancing from an IIS host.

Visit http://www.derekseaman.com/2013/05/import-iis-ssl-certificate-to-citrix-netscaler.html for how to export this new certificate into the Netscaler, UNTIL the section where you have to upload it to the NETSCALER.
(Mr Derek Seaman's instructions are good, but not for our NS version.  You can probably figure it out by clicking around, but just in case:)
 At this point, on the Netscaler, you select Traffic Management --> SSL and "Import PKCS#12".
Most of Mr Seaman's instructions will still work, but things may be very slightly out of order, like the order to click "browse" or whatever... but it is much easier with his diagrams than I can explain here.  Remember to use a good password manager to generate and store any passwords you use in this process.
When you are finished, the cert is ready to be used with your VIP.

IF YOU ARE INSTALLING THE CERT FOR THE NETSCALER DEVICE ITSELF:
Skip the step above where you upload the certificate, or remove it from the Netscaler if you have already uploaded it.

Download the "X509 Certificate only, Base64 encoded" file and open it in a text editor.
blah_com_cert.cer

Download the "X509 Intermediates/root only Reverse, Base64 encoded: " file and open it as well.
blah_com_interm.cer

Create a new text file.  Copy the "X509 Certificate only, Base64 encoded" cert into it first, then copy the NEXT two "X509 Intermediates/root only Reverse, Base64 encoded" certs below the first.  (They will be the first two in the blah_com_interm.cer file.  The root is the last one in that file and you don't want it.)  Now save the file with a meaningful name, like "blah_com-bun-noroot.crt".  (bun=bundle)
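
If you would rather script that copy/paste, here is a POSH sketch of the same bundle build (file names are the examples above; it just keeps the first two PEM blocks from the intermediates file and skips the root):

$cert   = Get-Content blah_com_cert.cer -Raw
$interm = Get-Content blah_com_interm.cer -Raw
# Split the intermediates file into individual PEM blocks, keep the first two, drop the root (the last one).
$blocks = [regex]::Matches($interm, '(?s)-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----') | ForEach-Object { $_.Value }
($cert.TrimEnd(), $blocks[0], $blocks[1]) -join "`r`n" | Set-Content blah_com-bun-noroot.crt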

Log in to the NS and upload your cert bundle and private key.  In this example they would be blah_com.key and blah_com-bun-noroot.crt.
Then under SSL/Certificates select the ns-server-certificate and update it.  There's a check box on the Update Certificate window that says "Click to update Certificate/Key."  Select that and then browse for the two files you just uploaded.  Also check the box "no domain check" if you are switching domain suffixes.
 After clicking "ok", wait a minute or two... then you will have to reconnect your browser.
Check the certificate in the browser; it should list the new certificate.
 
You do not have to modify the other node in the HA pair; the HA standby member gets updated automatically.

You can diagnose/view the certs you uploaded using "shell" and "openssl x509 -in NAMEOFFILE.cer -text -noout"

Happy balancing,
--Bryan