{"id":993,"date":"2013-05-17T19:49:08","date_gmt":"2013-05-17T23:49:08","guid":{"rendered":"https:\/\/infotechguy.net\/?p=993"},"modified":"2025-02-22T11:08:13","modified_gmt":"2025-02-22T16:08:13","slug":"squid-3-1-caching-proxy-with-ssl","status":"publish","type":"post","link":"https:\/\/infotechguy.net\/?p=993","title":{"rendered":"Squid Proxy &#8212; Caching Proxy with SSL with Squid3.1"},"content":{"rendered":"<p>Hello, hello! Recently I posted a two-part article on creating a <a title=\"Enabling a Guest WiFi Network\" href=\"https:\/\/infotechguy.net\/home-projects\/multiple-access-points-over-802-1q-using-openwrt\/\" target=\"_blank\" rel=\"noopener noreferrer\">Guest wireless network using OpenWRT<\/a>, VLANs, and Firewall rules. Now we left things kinda open from a security standpoint. We gave our Guest users full Internet access with no restrictions on sites, bandwidth usage, or ports!! Yikes! In this article I am going to walk you through the steps to close those gaps. First we will configure a Web Proxy server that will proxy outbound Internet connections. This lets us see where our Guests are going and what they are trying to get their hands on. Good and bad. We will also force Guests to connect to this Web Proxy server transparently. What I mean by that is the Guests will not be required to do anything on their side to connect; our firewall will take care of that. And lastly, I want to allow only a limited amount of bandwidth for HTTP traffic. You will see later on how we can accomplish this. 
I&#8217;ve expanded upon <a title=\"Network Adblocking using Squid, SquidGuard, and IPtables\" href=\"https:\/\/infotechguy.net\/network-adblocking-using-squid-squidguard-and-iptables\/\" target=\"_blank\" rel=\"noopener noreferrer\">this article<\/a> of mine that uses Squid proxy to filter ads.<br \/>\n<!--more--><br \/>\nPhew, let&#8217;s get started.<\/p>\n<h3>Installing Squid Proxy<\/h3>\n<ol>\n<li>\n<h4>Install dependencies<\/h4>\n<p>The easiest way to install dependencies on Ubuntu or another Debian-based Linux server is to use the <strong>apt-get build-dep<\/strong> and <strong>apt-get source<\/strong> commands. We are installing Squid from source because the default Squid package in the Ubuntu repositories isn&#8217;t built with the configure options we need to make our project work.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">apt-get install build-essential fakeroot devscripts gawk gcc-multilib dpatch \napt-get build-dep squid3 \napt-get build-dep openssl \napt-get source squid3<\/pre>\n<p><strong>NOTICE:<\/strong> <em>We just installed the essential build tools, along with any Squid dependencies, and the source files.<\/em><\/p>\n<p>You should now have a <strong>squid3-3.1.19<\/strong> folder.<\/p>\n<\/li>\n<li>\n<h4>Modify the build script<\/h4>\n<p>We need to modify the build script. 
This ensures that when the source files are configured, SSL support will be included.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">vi squid3-3.1.19\/debian\/rules<\/pre>\n<p>Add <strong>--enable-ssl<\/strong> under the <strong>DEB_CONFIGURE_EXTRA_FLAGS<\/strong> section.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">...\nDEB_CONFIGURE_EXTRA_FLAGS := --datadir=\/usr\/share\/squid3 \\\n                --sysconfdir=\/etc\/squid3 \\\n                --mandir=\/usr\/share\/man \\\n                --with-cppunit-basedir=\/usr \\\n                --enable-inline \\\n                --enable-ssl \\\n...\n<\/pre>\n<p><em>Don&#8217;t forget to save!<br \/>\n<\/em><\/li>\n<li>\n<h4>Configure, Make, Make Install<\/h4>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">cd squid3-3.1.19\/ \ndebuild -us -uc -b<\/pre>\n<p><strong>NOTICE:<\/strong> <em>Here we use debuild, which will automatically configure, compile, and produce installable DEB packages.<\/em><\/p>\n<p>After this has completed, the DEB packages will appear in the parent directory. 
Mine was called <strong>squid3_3.1.19-1ubuntu3.12.04.2_amd64.deb<\/strong>, <strong>squid3-common_3.1.19-1ubuntu3.12.04.2_all.deb<\/strong>,<br \/>\nand <strong>squid3-dbg_3.1.19-1ubuntu3.12.04.2_amd64.deb<\/strong>.<\/p>\n<p>Install them:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">dpkg -i squid3_3.1.19-1ubuntu3.12.04.2_amd64.deb squid3-common_3.1.19-1ubuntu3.12.04.2_all.deb squid3-dbg_3.1.19-1ubuntu3.12.04.2_amd64.deb<\/pre>\n<\/li>\n<li>\n<h4>Verify Squid Installation<\/h4>\n<p>For this step we just need to make sure that Squid installed properly and has SSL support.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">squid3 -v<\/pre>\n<p>Look for the <strong>version number 3.1.19<\/strong>.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">squid3 -v |grep ssl<\/pre>\n<p>Look for the <strong>--enable-ssl<\/strong> item.<\/p>\n<\/li>\n<\/ol>\n<h3>Configuring Squid Proxy<\/h3>\n<ol>\n<li>\n<h4>Back up the squid.conf file<\/h4>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">cp \/etc\/squid3\/squid.conf \/etc\/squid3\/squid.conf.bak<\/pre>\n<\/li>\n<li>\n<h4>Preparing Squid<\/h4>\n<p>We need to prepare the cache directory for Squid.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">cd \/home\/user\nmkdir squidcache\nchown proxy. squidcache<\/pre>\n<p>This will make a new directory in our home folder called squidcache, owned by the proxy user.<\/p>\n<\/li>\n<li>\n<h4>Initializing Squid Cache<\/h4>\n<p>Ensure Squid is not running first!<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">service squid3 stop<\/pre>\n<p>Initialize the cache&#8230;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">squid3 -z<\/pre>\n<\/li>\n<li>\n<h4>Edit the squid.conf file<\/h4>\n<p><strong>NOTICE: <\/strong>I highly recommend taking a look at the Squid3 documentation, <a href=\"http:\/\/www.squid-cache.org\/Doc\/config\/\" target=\"_blank\" rel=\"noopener noreferrer\">here<\/a>.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">vi \/etc\/squid3\/squid.conf\n#.................................\n#Access Lists\nacl manager proto cache_object\nacl localhost src 127.0.0.1\/32 ::1\nacl home_network src 192.168.0.0\/24\nacl guest_network src 192.168.1.0\/24\n\n#Ports allowed through Squid\nacl Safe_ports port 80 #http\nacl Safe_ports port 443 #https\nacl SSL_ports port 443\nacl CONNECT method CONNECT\n\n#allow\/deny\nhttp_access allow localhost\nhttp_access allow home_network\nhttp_access allow guest_network\nhttp_access deny !Safe_ports\nhttp_access deny CONNECT !SSL_ports\nhttp_access deny all\n\n#proxy ports\nhttp_port {proxy_server_IP}:3128\nhttp_port {proxy_server_IP}:8080 intercept\n\n#caching directory\ncache_dir ufs \/home\/user\/squidcache\/ 2048 16 128\ncache_mem 1024 MB\n\n#refresh patterns for caching static files\nrefresh_pattern ^ftp: 1440 20% 10080\nrefresh_pattern ^gopher: 1440 0% 1440\nrefresh_pattern -i \\.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private\nrefresh_pattern -i \\.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90% 432000 override-expire ignore-no-cache ignore-no-store ignore-private\nrefresh_pattern -i \\.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$ 
10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private\nrefresh_pattern -i \\.index.(html|htm)$ 0 40% 10080\nrefresh_pattern -i \\.(html|htm|css|js)$ 1440 40% 40320\nrefresh_pattern . 0 40% 40320\n\n#nameservers\ndns_nameservers {your DNS server IP}\n<\/pre>\n<p>Let&#8217;s step through this:<\/p>\n<ul>\n<li><strong>acl<\/strong> = tells Squid which IP addresses and\/or hosts belong to a given Access List. For example, home_network is any IP sourced from the 192.168.0.0\/24 network.<\/li>\n<li><strong>Safe_ports<\/strong> = tells Squid which ports are allowed through the proxy; we have defined only 80 and 443.<\/li>\n<li><strong>SSL_ports<\/strong> = tells Squid which ports are allowed when making an SSL connection.<\/li>\n<li><strong>http_access<\/strong> = defines which Access Lists (acl) are allowed to connect to the proxy.<\/li>\n<li><strong>http_port<\/strong> = binds an IP and port on the proxy server to listen for requests. We have two because one will be used for the Transparent Proxy, the other for clients who explicitly configure their browsers to connect to the proxy server.<\/li>\n<li><strong>intercept<\/strong> = is required for transparency to work.<\/li>\n<li><strong>cache_dir<\/strong> = defines where Squid should store cached static files, how much space it may consume, and how the directory tree is laid out. <strong>ufs<\/strong> is the type of storage system, <strong>\/home\/user\/squidcache\/<\/strong> is the directory to use, <strong>2048<\/strong> defines 2048MB of capacity, <strong>16<\/strong> defines the number of first-level subdirectories, and <strong>128<\/strong> defines the number of second-level subdirectories. For more info, <a href=\"http:\/\/www.squid-cache.org\/Doc\/config\/cache_dir\/\" target=\"_blank\" rel=\"noopener noreferrer\">see here<\/a>.<\/li>\n<li><strong>cache_mem<\/strong> = defines how much memory should be allocated for Squid caching.<\/li>\n<li><strong>refresh_pattern<\/strong> = used to assign expirations, storage, retention, etc. of static files. 
<a href=\"http:\/\/archive09.linux.com\/feature\/153221\" target=\"_blank\" rel=\"noopener noreferrer\">More info, here.<\/a><\/li>\n<li><strong>dns_nameservers<\/strong> = defines the nameservers for Squid to use, rather than those in the \/etc\/resolv.conf file. I recommend these be your external or DMZ DNS servers.<\/li>\n<\/ul>\n<\/li>\n<li>\n<h4>Testing<\/h4>\n<p>Assuming that your proxy server has a path out to the internet, configure your browser to use a proxy server and point it at your Squid server. Use the instance running on port 3128.<\/p>\n<p>Success!<\/p>\n<\/li>\n<\/ol>\n<h3>Firewall rules and Redirects<\/h3>\n<ol>\n<li>\n<h4>Enable Proxy server Internet Access<\/h4>\n<p>If not already done, give the Proxy Server internet access via our Linux Router.<\/p>\n<p>On the Linux Router:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">iptables -t nat -I POSTROUTING -s {yourProxyIP}\/32 -p tcp -m multiport --dports 80,443 -j MASQUERADE<\/pre>\n<p><strong>NOTICE:<\/strong> <em>This assumes that your router has a public interface, or an interface that routes to the internet.<\/em><\/p>\n<\/li>\n<li>\n<h4>Quick test<\/h4>\n<p>From the Proxy server:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">telnet cnn.com 80\n...\nGET \/<\/pre>\n<p>This should return some HTML.<\/p>\n<\/li>\n<li>\n<h4>Transparent redirects<\/h4>\n<p>On my Linux Router I have 2 interfaces: <strong>eth0<\/strong> = Internet\/Public Interface and <strong>eth1<\/strong> = Trunk containing both the 192.168.0.0\/24 and 192.168.1.0\/24 networks.<\/p>\n<p>We need to catch traffic coming from these networks whose destinations are out on the internet. The easiest way to filter is based on destination port and source IP\/network.<\/p>\n<p>On our Linux Router:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">iptables -t nat -A PREROUTING -s 192.168.0.0\/24 ! -d {squid-server-IP}\/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination {squid-server-IP}:8080\niptables -t nat -A PREROUTING -s 192.168.1.0\/24 ! -d {squid-server-IP}\/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination {squid-server-IP}:8080\niptables -t nat -A PREROUTING -s 192.168.0.0\/24 ! -d {squid-server-IP}\/32 -p tcp -m tcp --dport 443 -j DNAT --to-destination {squid-server-IP}:8443\niptables -t nat -A PREROUTING -s 192.168.1.0\/24 ! -d {squid-server-IP}\/32 -p tcp -m tcp --dport 443 -j DNAT --to-destination {squid-server-IP}:8443\n<\/pre>\n<p>We must also add FORWARD rules:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">iptables -A FORWARD -s 192.168.0.0\/24 -d {squid-server-IP}\/32 -p tcp -m multiport --dports 80,443 -j ACCEPT\niptables -A FORWARD -s 192.168.1.0\/24 -d {squid-server-IP}\/32 -p tcp -m multiport --dports 80,443 -j ACCEPT<\/pre>\n<\/li>\n<li>\n<h4>Testing<\/h4>\n<p>With the previous rules in place, any client on either the 192.168.0.0\/24 or 192.168.1.0\/24 network should be transparently redirected to the proxy server. To test, remove any proxy settings you may have in your browser and then try to connect to an internet site directly over http:\/\/ and then over https:\/\/.<\/p>\n<\/li>\n<\/ol>\n<h3>Throttling and Filtering Traffic<\/h3>\n<p>It is imperative that you verify the functionality of the previous steps before continuing. In this section we are going to limit the bandwidth consumed by our Guest Network clients only, and also do some basic filtering of requests, such as blocking known malicious sites. 
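The delay-pool throttling coming up takes its limits as raw byte counts, which makes the magic numbers easy to get wrong. As a quick sketch (the kb_to_bytes helper is mine, not a Squid tool), here is how a 1 MB/s pool-wide cap and a 100 KB/s per-client cap translate into a delay_parameters line:

```shell
# Derive delay_parameters byte values from KB/s figures.
# kb_to_bytes is an illustrative helper, not part of Squid.
kb_to_bytes() { echo $(( $1 * 1024 )); }

aggregate=$(kb_to_bytes 1024)   # whole guest pool: 1 MB/s
per_host=$(kb_to_bytes 100)     # each guest client: 100 KB/s
echo "delay_parameters 1 ${aggregate}/${aggregate} ${per_host}/${per_host}"
# prints: delay_parameters 1 1048576/1048576 102400/102400
```

The same arithmetic works for any other cap you might prefer; only the KB/s inputs change.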
We will use the <a href=\"http:\/\/wiki.squid-cache.org\/Features\/DelayPools\" target=\"_blank\" rel=\"noopener noreferrer\">delay_pool<\/a> feature of Squid to perform this throttling and <a href=\"http:\/\/www.squidguard.org\/\" target=\"_blank\" rel=\"noopener noreferrer\">SquidGuard<\/a> to perform the filtering.<\/p>\n<ul>\n<li>\n<h4>Adding a Delay Pool<\/h4>\n<p>A delay pool is a feature of Squid that allows you to delay outbound requests from users based on conditions. For our purposes I wanted to limit Guest Network users to a flat rate of 100KB\/s. This ensures that no Guest Network user can completely saturate the total bandwidth from my ISP.<\/p>\n<p>Let&#8217;s edit our squid.conf file:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">vi \/etc\/squid3\/squid.conf<\/pre>\n<p>Add the following lines:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">#delay pools\ndelay_pools 1 # how many delay pools will be defined\ndelay_class 1 2\ndelay_access 1 allow guest_network\ndelay_parameters 1 1048576\/1048576 102400\/102400\ndelay_access 1 deny all\n<\/pre>\n<p>Let&#8217;s walk through this&#8230;<\/p>\n<ul>\n<li><strong>delay_pools 1<\/strong> &#8212; Denotes how many delay pools we will define in the squid.conf.<\/li>\n<li><strong>delay_class 1 2<\/strong> &#8212; This matches a delay pool class to a delay pool. The 1 is our delay pool number to match, and the 2 is the type of delay class.<a href=\"http:\/\/wiki.squid-cache.org\/Features\/DelayPools\" target=\"_blank\" rel=\"noopener noreferrer\"> See here <\/a>for more info on delay classes.<\/li>\n<li><strong>delay_access 1 allow<\/strong> &#8212; This is a standard access list. It defines which ACL from the top of our squid.conf will be associated with this delay pool. 
The 1 signifies the delay_pool to associate the ACL with.<\/li>\n<li><strong>delay_parameters 1<\/strong> &#8212; Here is where we define the parameters for the delay class, specifically reducing bandwidth consumption to a flat 100KB\/s. The units are in bytes. The first part (1048576\/1048576, or 1MB\/s) denotes the max bandwidth allocated to this delay pool as a whole. The second part (102400\/102400, or 100KB\/s) is the max bandwidth for each client within the ACL. This helps prevent one user from hogging all the bandwidth from the rest of our users.<\/li>\n<li><strong>delay_access 1 deny all<\/strong> &#8212; The last line denies all other ACLs access to delay pool 1.<\/li>\n<\/ul>\n<p><strong>NOTICE:<\/strong> <em>Don&#8217;t forget to restart Squid!<\/em><\/p>\n<\/li>\n<li>\n<h4>SquidGuard Filtering<\/h4>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">apt-get install squidguard -y<\/pre>\n<p>Once squidGuard is installed we need to tell Squid to use it. Once again, edit the squid.conf file:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">vi \/etc\/squid3\/squid.conf<\/pre>\n<p>Add the following lines:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">#rewrite program squidGuard\nurl_rewrite_program \/usr\/bin\/squidGuard -c \/etc\/squid\/squidGuard.conf\nurl_rewrite_children 20 #threads\nurl_rewrite_concurrency 0 #jobs per thread\n<\/pre>\n<ul>\n<li><strong>url_rewrite_program<\/strong> &#8212; Defines the rewrite program we will use, in this case squidGuard; the -c flag tells it which squidGuard.conf file to use.<\/li>\n<li><strong>url_rewrite_children 20<\/strong> &#8212; Defines how many child processes or threads to open. This varies with how many users you have as well as the resources of the proxy server itself.<\/li>\n<li><strong>url_rewrite_concurrency<\/strong> &#8212; This tells how many squidGuard jobs can run per thread. 
Be careful with this, as it multiplies the previous parameter.<\/li>\n<\/ul>\n<p>Adding blocklists:<br \/>\nCreate a folder where you will keep the blocklist files.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">mkdir ~\/blocklists\/\ncd ~\/blocklists\/\nvi blocked-domains<\/pre>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">...omitted...\n{bad domain name}\n...omitted...\n<\/pre>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">vi blocked-url\n...omitted...\n{bad ip site}\n{bad url}\n...omitted...\n<\/pre>\n<p>Then&#8230;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">chown proxy. *<\/pre>\n<p><strong>NOTICE: <\/strong>We just created two block files. One contains domain names, such as yahoo.com or facebook.com. The other contains URLs, such as 1.2.3.4 or 5.6.7.8\/badstuff. Then we changed the ownership to the proxy user so Squid and SquidGuard can read them.<\/p>\n<p><strong>Next:<\/strong><br \/>\nYou may have noticed the <strong>\/etc\/squid\/squidGuard.conf<\/strong> from the above step. 
Let&#8217;s create\/edit that file with squidGuard-specific options.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">vi \/etc\/squid\/squidGuard.conf\n.................................\ndbhome \/home\/user\/blocklists\/\n\n#define src\nsrc guests {\n         ip        192.168.1.0\/24\n}\n#define category 'deny'\ndest badsites {\n        domainlist blocked-domains\n        urllist blocked-url\n}\nacl {\n        guests {\n                #allow all except badsites\n                pass !badsites all\n                #redirect\n                redirect http:\/\/{webserver}\/deny.html\n        }\n        default {\n                pass all\n        }\n}\n<\/pre>\n<p><strong>Lastly:<\/strong><br \/>\nInitialize the block lists:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">squidGuard -C all<\/pre>\n<\/li>\n<\/ul>\n<p><strong>NOTICE:<\/strong> You will have to run <strong>&#8220;squidGuard -C all&#8221;<\/strong> each time you modify the files. This updates the .db files squidGuard creates.<\/p>\n<p><strong>Further notes:<\/strong> <em>The biggest issue with squidGuard is that it is very picky about the blocklist files. Each item should be on a new line without leading or trailing spaces. Make sure both the blocklist and the blocklist .db files are readable by Squid and SquidGuard. Also, I believe there is an issue with the current SquidGuard build when trying to filter based on source IP in transparent setups. It seems each request handed to squidGuard from Squid goes to the default option in the squidGuard.conf file.<\/em><\/p>\n<h3>Optional: Adding SSL Interception\/Inspection Support<\/h3>\n<p>This next section allows your Squid proxy server to intercept SSL connections made by your clients. <strong>Warning!<\/strong> Doing so will most likely look like a man-in-the-middle attack. Clients will be connecting to your proxy server when trying to go to SSL-protected sites, thus breaking the SSL transaction. 
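To see the failure mode concretely, the sketch below generates a throwaway self-signed certificate (the CN and file path are illustrative, not the actual Squid cert we create later) and shows that a client with no trust in it rejects verification, just as guest browsers will:

```shell
# Create a disposable self-signed cert standing in for the proxy's cert.
# CN and path are illustrative only.
openssl req -x509 -nodes -days 1 -newkey rsa:2048 \
  -subj "/CN=proxy.example.lan" \
  -keyout /tmp/demo-proxy.pem -out /tmp/demo-proxy.pem 2>/dev/null

# Verification fails because no trusted CA vouches for this certificate.
openssl verify /tmp/demo-proxy.pem 2>&1 || true
```

Browsers behave the same way: with no CA they trust having signed the proxy's certificate, every intercepted HTTPS site triggers a warning.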
For example, a client opens a connection to https:\/\/mail.google.com. This connection is intercepted by the proxy server, which does not hold Google&#8217;s private SSL key. An untrusted-certificate mismatch will occur. I would also like to note that you should consider the behaviour you are trying to achieve by having SSL connections proxied through Squid. The nature of SSL does not allow us to easily perform proxy features such as caching, content filtering, content manipulation, etc. Therefore, if you are setting up SSL pass-through with Squid, you are effectively doing the same thing a router would. In conclusion, the only reasons I can think of for enabling SSL Interception would be for auditing and monitoring purposes. For example, you are willing to allow the use of 3rd-party Web Email (GMAIL, YAHOO) for your employees, but you require that the users are monitored to prevent data leakage, etc.<\/p>\n<p>For my purposes of a guest network, this was okay behaviour. In an enterprise, you would need additional steps to establish trust between you and your users.<\/p>\n<p><strong>Creating our Self-Signed SSL Cert:<\/strong> (<a href=\"http:\/\/www.sslshopper.com\/article-how-to-create-and-install-an-apache-self-signed-certificate.html\" target=\"_blank\" rel=\"noopener noreferrer\">See here<\/a>)<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">cd \/etc\/squid3\/\nmkdir certs\nopenssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout certs\/squid.pem -out certs\/squid.pem<\/pre>\n<p><strong>NOTICE: <\/strong><em>You will be prompted for some information about the certificate, such as geographic location and common name. 
Fill in as you see fit.<\/em><\/p>\n<p><strong>Enabling SSL-Bump:<\/strong><\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">vi \/etc\/squid3\/squid.conf\n#Add the following...\nhttp_port {squidip}:8443 ssl-bump cert=\/etc\/squid3\/certs\/squid.pem key=\/etc\/squid3\/certs\/squid.pem\n<\/pre>\n<p><strong>NOTICE:<\/strong> <em>Users will get a certificate mismatch on the SSL-enabled sites they try to visit. They will have to add exceptions to trust the self-signed cert from above for each site. Again, this may not be desired behavior. Consult with your PKI engineer for ways to do this in an enterprise setting where you may have an authoritative CA that can vouch for your clients.<\/em><\/p>\n<h3>Final Thoughts&#8230;<\/h3>\n<p>There are still some issues if you attempt to deploy this in a production environment: transparent NAT security issues, issues with filtering by source IP, SSL requiring SSL-Bump, etc. I will post another article once I have fine-tuned these.<\/p>\n<p>Cheers!<\/p>\n<h3>Sources:<\/h3>\n<ul>\n<li><a href=\"http:\/\/www.d90.us\/toolbox\/2009\/05\/26\/adding-ssl-support-to-squid-package-on-ubuntu\/\" target=\"_blank\" rel=\"noopener noreferrer\">http:\/\/www.d90.us\/toolbox\/2009\/05\/26\/adding-ssl-support-to-squid-package-on-ubuntu\/<\/a><\/li>\n<li><a href=\"http:\/\/wiki.squid-cache.org\/SquidFaq\/CompilingSquid#Do_you_have_pre-compiled_binaries_available.3F\" target=\"_blank\" rel=\"noopener noreferrer\">http:\/\/wiki.squid-cache.org\/SquidFaq\/CompilingSquid#Do_you_have_pre-compiled_binaries_available.3F<\/a><\/li>\n<li><a href=\"http:\/\/www.squid-cache.org\/Doc\/config\/https_port\/\" target=\"_blank\" rel=\"noopener noreferrer\">http:\/\/www.squid-cache.org\/Doc\/config\/https_port\/<\/a><\/li>\n<li><a href=\"http:\/\/wiki.squid-cache.org\/Features\/DynamicSslCert\" target=\"_blank\" rel=\"noopener noreferrer\">http:\/\/wiki.squid-cache.org\/Features\/DynamicSslCert<\/a><\/li>\n<li><a 
href=\"http:\/\/wiki.squid-cache.org\/Features\/SslBump\" target=\"_blank\" rel=\"noopener noreferrer\">http:\/\/wiki.squid-cache.org\/Features\/SslBump<\/a><\/li>\n<li><a href=\"http:\/\/wiki.squid-cache.org\/Features\/HTTPS\" target=\"_blank\" rel=\"noopener noreferrer\">http:\/\/wiki.squid-cache.org\/Features\/HTTPS<\/a><\/li>\n<li><a href=\"http:\/\/ubuntuforums.org\/showthread.php?t=2049290\" target=\"_blank\" rel=\"noopener noreferrer\">http:\/\/ubuntuforums.org\/showthread.php?t=2049290<\/a><\/li>\n<li><a href=\"http:\/\/www.mydlp.com\/http-and-https-redirecting-with-netfilter-iptables\/\" target=\"_blank\" rel=\"noopener noreferrer\">http:\/\/www.mydlp.com\/http-and-https-redirecting-with-netfilter-iptables\/<\/a><\/li>\n<li><a href=\"http:\/\/www.howtoforge.com\/squid-delay-pools-bandwidth-management\" target=\"_blank\" rel=\"noopener noreferrer\">http:\/\/www.howtoforge.com\/squid-delay-pools-bandwidth-management<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Hello, hello! Recently I posted a two part article on creating a Guest wireless network using OpenWRT, VLANs, and Firewall rules. Now we left things kinda open from a security standpoint. 
We gave our Guest&#46;&#46;&#46;<\/p>\n","protected":false},"author":2,"featured_media":4240,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[7],"tags":[86],"class_list":["post-993","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-linux","tag-linux"],"_links":{"self":[{"href":"https:\/\/infotechguy.net\/index.php?rest_route=\/wp\/v2\/posts\/993","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/infotechguy.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/infotechguy.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/infotechguy.net\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/infotechguy.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=993"}],"version-history":[{"count":1,"href":"https:\/\/infotechguy.net\/index.php?rest_route=\/wp\/v2\/posts\/993\/revisions"}],"predecessor-version":[{"id":4170,"href":"https:\/\/infotechguy.net\/index.php?rest_route=\/wp\/v2\/posts\/993\/revisions\/4170"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/infotechguy.net\/index.php?rest_route=\/wp\/v2\/media\/4240"}],"wp:attachment":[{"href":"https:\/\/infotechguy.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=993"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/infotechguy.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=993"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/infotechguy.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=993"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}